Feb 13 19:32:10.382436 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:32:10.382456 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:32:10.382464 kernel: KASLR enabled
Feb 13 19:32:10.382470 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 19:32:10.382477 kernel: printk: bootconsole [pl11] enabled
Feb 13 19:32:10.382482 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:32:10.382489 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Feb 13 19:32:10.382495 kernel: random: crng init done
Feb 13 19:32:10.382501 kernel: secureboot: Secure boot disabled
Feb 13 19:32:10.382507 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:32:10.382513 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 19:32:10.382519 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382525 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382532 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 19:32:10.382540 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382546 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382552 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382560 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382566 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382572 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382579 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 19:32:10.382585 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:32:10.382591 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 19:32:10.382597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 19:32:10.382604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 19:32:10.382610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 19:32:10.382616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 19:32:10.382622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 19:32:10.382630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 19:32:10.382636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 19:32:10.382643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 19:32:10.382649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 19:32:10.382655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 19:32:10.382661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 19:32:10.382667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 19:32:10.382674 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Feb 13 19:32:10.382680 kernel: Zone ranges:
Feb 13 19:32:10.382686 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 19:32:10.382692 kernel: DMA32 empty
Feb 13 19:32:10.382698 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 19:32:10.382708 kernel: Movable zone start for each node
Feb 13 19:32:10.382715 kernel: Early memory node ranges
Feb 13 19:32:10.382721 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 19:32:10.382728 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 19:32:10.382735 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 19:32:10.382742 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 19:32:10.382749 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 19:32:10.382756 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 19:32:10.382762 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 19:32:10.382769 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 19:32:10.382775 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 19:32:10.382782 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 19:32:10.382788 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 19:32:10.382795 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:32:10.382801 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:32:10.382808 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:32:10.383849 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 19:32:10.383861 kernel: psci: SMC Calling Convention v1.4
Feb 13 19:32:10.383868 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 19:32:10.383874 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 19:32:10.383881 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:32:10.383888 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:32:10.383895 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:32:10.383902 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:32:10.383908 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:32:10.383915 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:32:10.383921 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:32:10.383928 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:32:10.383937 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:32:10.383943 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:32:10.383950 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 19:32:10.383957 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:32:10.383963 kernel: alternatives: applying boot alternatives
Feb 13 19:32:10.383972 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:32:10.383979 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:32:10.383986 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:32:10.383993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:32:10.383999 kernel: Fallback order for Node 0: 0
Feb 13 19:32:10.384006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 19:32:10.384014 kernel: Policy zone: Normal
Feb 13 19:32:10.384021 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:32:10.384027 kernel: software IO TLB: area num 2.
Feb 13 19:32:10.384034 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Feb 13 19:32:10.384041 kernel: Memory: 3983652K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210508K reserved, 0K cma-reserved)
Feb 13 19:32:10.384048 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:32:10.384055 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:32:10.384062 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:32:10.384069 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:32:10.384075 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:32:10.384082 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:32:10.384090 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:32:10.384097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:32:10.384104 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:32:10.384110 kernel: GICv3: 960 SPIs implemented
Feb 13 19:32:10.384117 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:32:10.384123 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:32:10.384130 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:32:10.384136 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 19:32:10.384143 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 19:32:10.384150 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:32:10.384156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:32:10.384163 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:32:10.384171 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:32:10.384178 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:32:10.384185 kernel: Console: colour dummy device 80x25
Feb 13 19:32:10.384192 kernel: printk: console [tty1] enabled
Feb 13 19:32:10.384199 kernel: ACPI: Core revision 20230628
Feb 13 19:32:10.384206 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:32:10.384213 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:32:10.384219 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:32:10.384226 kernel: landlock: Up and running.
Feb 13 19:32:10.384235 kernel: SELinux: Initializing.
Feb 13 19:32:10.384242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:10.384249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:10.384255 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:32:10.384262 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:32:10.384269 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 19:32:10.384276 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 19:32:10.384289 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 19:32:10.384296 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:32:10.384304 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:32:10.384311 kernel: Remapping and enabling EFI services.
Feb 13 19:32:10.384318 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:32:10.384327 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:32:10.384334 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 19:32:10.384341 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:32:10.384348 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:32:10.384355 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:32:10.384364 kernel: SMP: Total of 2 processors activated.
Feb 13 19:32:10.384371 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:32:10.384378 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 19:32:10.384386 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:32:10.384393 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:32:10.384400 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:32:10.384407 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:32:10.384414 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:32:10.384422 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:32:10.384430 kernel: alternatives: applying system-wide alternatives
Feb 13 19:32:10.384438 kernel: devtmpfs: initialized
Feb 13 19:32:10.384445 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:32:10.384452 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:32:10.384459 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:32:10.384466 kernel: SMBIOS 3.1.0 present.
Feb 13 19:32:10.384474 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 19:32:10.384481 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:32:10.384488 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:32:10.384497 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:32:10.384504 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:32:10.384512 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:32:10.384519 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 19:32:10.384526 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:32:10.384533 kernel: cpuidle: using governor menu
Feb 13 19:32:10.384540 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:32:10.384547 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:32:10.384554 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:32:10.384563 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:32:10.384571 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:32:10.384578 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:32:10.384585 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:32:10.384592 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:32:10.384599 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:32:10.384607 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:32:10.384614 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:32:10.384621 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:32:10.384630 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:32:10.384637 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:32:10.384644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:32:10.384651 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:32:10.384658 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:32:10.384665 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:32:10.384673 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:32:10.384680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:32:10.384687 kernel: ACPI: Interpreter enabled
Feb 13 19:32:10.384696 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:32:10.384703 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:32:10.384710 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:32:10.384717 kernel: printk: bootconsole [pl11] disabled
Feb 13 19:32:10.384725 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 19:32:10.384732 kernel: iommu: Default domain type: Translated
Feb 13 19:32:10.384739 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:32:10.384746 kernel: efivars: Registered efivars operations
Feb 13 19:32:10.384753 kernel: vgaarb: loaded
Feb 13 19:32:10.384762 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:32:10.384770 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:32:10.384777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:32:10.384784 kernel: pnp: PnP ACPI init
Feb 13 19:32:10.384791 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 19:32:10.384798 kernel: NET: Registered PF_INET protocol family
Feb 13 19:32:10.384805 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:32:10.384822 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:32:10.384830 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:32:10.384839 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:32:10.384846 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:32:10.384860 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:32:10.384867 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:10.384874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:10.384882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:32:10.384889 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:32:10.384896 kernel: kvm [1]: HYP mode not available
Feb 13 19:32:10.384903 kernel: Initialise system trusted keyrings
Feb 13 19:32:10.384912 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:32:10.384919 kernel: Key type asymmetric registered
Feb 13 19:32:10.384926 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:32:10.384933 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:32:10.384940 kernel: io scheduler mq-deadline registered
Feb 13 19:32:10.384948 kernel: io scheduler kyber registered
Feb 13 19:32:10.384955 kernel: io scheduler bfq registered
Feb 13 19:32:10.384962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:32:10.384969 kernel: thunder_xcv, ver 1.0
Feb 13 19:32:10.384977 kernel: thunder_bgx, ver 1.0
Feb 13 19:32:10.384984 kernel: nicpf, ver 1.0
Feb 13 19:32:10.384991 kernel: nicvf, ver 1.0
Feb 13 19:32:10.385132 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:32:10.385205 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:32:09 UTC (1739475129)
Feb 13 19:32:10.385215 kernel: efifb: probing for efifb
Feb 13 19:32:10.385222 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 19:32:10.385229 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 19:32:10.385239 kernel: efifb: scrolling: redraw
Feb 13 19:32:10.385246 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:32:10.385253 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 19:32:10.385260 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:32:10.385268 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 19:32:10.385275 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:32:10.385282 kernel: No ACPI PMU IRQ for CPU0
Feb 13 19:32:10.385289 kernel: No ACPI PMU IRQ for CPU1
Feb 13 19:32:10.385296 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 19:32:10.385305 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:32:10.385312 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:32:10.385319 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:32:10.385326 kernel: Segment Routing with IPv6
Feb 13 19:32:10.385334 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:32:10.385341 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:32:10.385348 kernel: Key type dns_resolver registered
Feb 13 19:32:10.385355 kernel: registered taskstats version 1
Feb 13 19:32:10.385362 kernel: Loading compiled-in X.509 certificates
Feb 13 19:32:10.385371 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:32:10.385378 kernel: Key type .fscrypt registered
Feb 13 19:32:10.385385 kernel: Key type fscrypt-provisioning registered
Feb 13 19:32:10.385392 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:32:10.385399 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:32:10.385406 kernel: ima: No architecture policies found
Feb 13 19:32:10.385413 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:32:10.385421 kernel: clk: Disabling unused clocks
Feb 13 19:32:10.385428 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:32:10.385437 kernel: Run /init as init process
Feb 13 19:32:10.385444 kernel: with arguments:
Feb 13 19:32:10.385451 kernel: /init
Feb 13 19:32:10.385458 kernel: with environment:
Feb 13 19:32:10.385465 kernel: HOME=/
Feb 13 19:32:10.385472 kernel: TERM=linux
Feb 13 19:32:10.385479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:32:10.385487 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:32:10.385498 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:32:10.385506 systemd[1]: Detected virtualization microsoft.
Feb 13 19:32:10.385514 systemd[1]: Detected architecture arm64.
Feb 13 19:32:10.385521 systemd[1]: Running in initrd.
Feb 13 19:32:10.385528 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:32:10.385536 systemd[1]: Hostname set to <localhost>.
Feb 13 19:32:10.385544 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:32:10.385551 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:32:10.385560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:10.385568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:10.385576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:32:10.385584 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:32:10.385592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:32:10.385601 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:32:10.385609 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:32:10.385619 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:32:10.385627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:10.385635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:32:10.385642 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:32:10.385650 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:32:10.385657 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:32:10.385665 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:32:10.385673 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:32:10.385683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:32:10.385691 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:32:10.385698 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:32:10.385706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:10.385714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:10.385721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:10.385729 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:32:10.385737 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:32:10.385745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:32:10.385754 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:32:10.385761 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:32:10.385769 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:32:10.385777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:32:10.385801 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 19:32:10.392235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:10.392248 systemd-journald[218]: Journal started
Feb 13 19:32:10.392269 systemd-journald[218]: Runtime Journal (/run/log/journal/fb2b3112963346b3ab478f4796ccb9d2) is 8M, max 78.5M, 70.5M free.
Feb 13 19:32:10.392947 systemd-modules-load[220]: Inserted module 'overlay'
Feb 13 19:32:10.409411 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:32:10.410285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:32:10.437369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:10.463274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:32:10.463296 kernel: Bridge firewalling registered
Feb 13 19:32:10.456104 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:32:10.462659 systemd-modules-load[220]: Inserted module 'br_netfilter'
Feb 13 19:32:10.469214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:10.481834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:10.505026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:10.521990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:32:10.528974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:32:10.559376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:32:10.576842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:10.586847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:32:10.600156 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:32:10.613408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:32:10.641061 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:32:10.649988 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:32:10.678961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:32:10.695172 dracut-cmdline[252]: dracut-dracut-053
Feb 13 19:32:10.701573 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:32:10.702235 systemd-resolved[255]: Positive Trust Anchors:
Feb 13 19:32:10.702244 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:32:10.702273 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:32:10.706279 systemd-resolved[255]: Defaulting to hostname 'linux'.
Feb 13 19:32:10.717019 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:32:10.750982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:32:10.808870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:32:10.882834 kernel: SCSI subsystem initialized
Feb 13 19:32:10.893827 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:32:10.901832 kernel: iscsi: registered transport (tcp)
Feb 13 19:32:10.920179 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:32:10.920240 kernel: QLogic iSCSI HBA Driver
Feb 13 19:32:10.953101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:32:10.966199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:32:10.999189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:32:10.999221 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:32:11.006846 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:32:11.056841 kernel: raid6: neonx8 gen() 15751 MB/s
Feb 13 19:32:11.075833 kernel: raid6: neonx4 gen() 15821 MB/s
Feb 13 19:32:11.095822 kernel: raid6: neonx2 gen() 13274 MB/s
Feb 13 19:32:11.116829 kernel: raid6: neonx1 gen() 10492 MB/s
Feb 13 19:32:11.136823 kernel: raid6: int64x8 gen() 6792 MB/s
Feb 13 19:32:11.156825 kernel: raid6: int64x4 gen() 7354 MB/s
Feb 13 19:32:11.177823 kernel: raid6: int64x2 gen() 6114 MB/s
Feb 13 19:32:11.201615 kernel: raid6: int64x1 gen() 5061 MB/s
Feb 13 19:32:11.201635 kernel: raid6: using algorithm neonx4 gen() 15821 MB/s
Feb 13 19:32:11.228612 kernel: raid6: .... xor() 12428 MB/s, rmw enabled
Feb 13 19:32:11.228624 kernel: raid6: using neon recovery algorithm
Feb 13 19:32:11.240226 kernel: xor: measuring software checksum speed
Feb 13 19:32:11.240250 kernel: 8regs : 21613 MB/sec
Feb 13 19:32:11.243747 kernel: 32regs : 21636 MB/sec
Feb 13 19:32:11.247234 kernel: arm64_neon : 27917 MB/sec
Feb 13 19:32:11.251604 kernel: xor: using function: arm64_neon (27917 MB/sec)
Feb 13 19:32:11.302853 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:32:11.312593 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:32:11.328966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:32:11.353878 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Feb 13 19:32:11.360387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:32:11.381021 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:32:11.397663 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Feb 13 19:32:11.422472 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:32:11.440067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:32:11.482382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:32:11.504255 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:32:11.531201 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:32:11.543872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:32:11.560024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:11.576344 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:32:11.597991 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 19:32:11.598146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:32:11.628971 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:32:11.644176 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 19:32:11.644200 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 19:32:11.646844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:32:11.718913 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 19:32:11.718939 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 19:32:11.718949 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 19:32:11.718958 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 13 19:32:11.718968 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 19:32:11.718977 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 19:32:11.719110 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 19:32:11.719122 kernel: PTP clock support registered
Feb 13 19:32:11.646999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:11.748346 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 19:32:11.748381 kernel: hv_vmbus: registering driver hv_utils
Feb 13 19:32:11.688959 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:11.766525 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 19:32:11.766548 kernel: scsi host0: storvsc_host_t
Feb 13 19:32:11.766712 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 19:32:11.712344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:32:12.254711 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 19:32:12.254734 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 19:32:12.254778 kernel: scsi host1: storvsc_host_t
Feb 13 19:32:11.712557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:11.726497 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:12.244232 systemd-resolved[255]: Clock change detected. Flushing caches.
Feb 13 19:32:12.300534 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 19:32:12.251077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:12.285850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:32:12.320398 kernel: hv_netvsc 002248b9-996e-0022-48b9-996e002248b9 eth0: VF slot 1 added
Feb 13 19:32:12.285967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:12.299196 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:32:12.335064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:32:12.376334 kernel: hv_vmbus: registering driver hv_pci
Feb 13 19:32:12.376357 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 19:32:12.376524 kernel: hv_pci adcdf33a-edae-44b3-bbf7-d1747b48404e: PCI VMBus probing: Using version 0x10004
Feb 13 19:32:12.471826 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:32:12.471849 kernel: hv_pci adcdf33a-edae-44b3-bbf7-d1747b48404e: PCI host bridge to bus edae:00
Feb 13 19:32:12.471957 kernel: pci_bus edae:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 19:32:12.472064 kernel: pci_bus edae:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 19:32:12.472142 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 19:32:12.472239 kernel: pci edae:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 19:32:12.472335 kernel: pci edae:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 19:32:12.472416 kernel: pci edae:00:02.0: enabling Extended Tags
Feb 13 19:32:12.472497 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 19:32:12.494590 kernel: pci edae:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at edae:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 19:32:12.496829 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 19:32:12.496962 kernel: pci_bus edae:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 19:32:12.497051 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 19:32:12.497133 kernel: pci edae:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 19:32:12.497218 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 19:32:12.497300 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 19:32:12.497379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:32:12.497389 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 19:32:12.452482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:12.489861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:32:12.534741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:12.562046 kernel: mlx5_core edae:00:02.0: enabling device (0000 -> 0002)
Feb 13 19:32:12.785405 kernel: mlx5_core edae:00:02.0: firmware version: 16.30.1284
Feb 13 19:32:12.785546 kernel: hv_netvsc 002248b9-996e-0022-48b9-996e002248b9 eth0: VF registering: eth1
Feb 13 19:32:12.785641 kernel: mlx5_core edae:00:02.0 eth1: joined to eth0
Feb 13 19:32:12.785824 kernel: mlx5_core edae:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 19:32:12.793719 kernel: mlx5_core edae:00:02.0 enP60846s1: renamed from eth1
Feb 13 19:32:13.025907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 19:32:13.091720 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (492)
Feb 13 19:32:13.108818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 19:32:13.140724 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 19:32:13.178012 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (495)
Feb 13 19:32:13.194420 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 19:32:13.203178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 19:32:13.239919 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:32:13.270124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:32:13.278713 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:32:14.290774 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:32:14.292255 disk-uuid[606]: The operation has completed successfully.
Feb 13 19:32:14.348545 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:32:14.348647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:32:14.399857 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:32:14.417837 sh[692]: Success
Feb 13 19:32:14.451935 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:32:14.645788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:32:14.674822 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:32:14.687940 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:32:14.736993 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:32:14.737044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:32:14.748321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:32:14.755020 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:32:14.760949 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:32:15.023484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:32:15.029282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:32:15.045918 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:32:15.059461 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:32:15.094027 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:32:15.094082 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:32:15.102053 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:32:15.135002 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:32:15.143315 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:32:15.162324 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:32:15.148728 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:32:15.172852 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:32:15.191985 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:32:15.199850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:32:15.232894 systemd-networkd[874]: lo: Link UP
Feb 13 19:32:15.236620 systemd-networkd[874]: lo: Gained carrier
Feb 13 19:32:15.238326 systemd-networkd[874]: Enumeration completed
Feb 13 19:32:15.238418 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:32:15.239037 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:15.239041 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:32:15.245240 systemd[1]: Reached target network.target - Network.
Feb 13 19:32:15.334714 kernel: mlx5_core edae:00:02.0 enP60846s1: Link up
Feb 13 19:32:15.372852 kernel: hv_netvsc 002248b9-996e-0022-48b9-996e002248b9 eth0: Data path switched to VF: enP60846s1
Feb 13 19:32:15.372754 systemd-networkd[874]: enP60846s1: Link UP
Feb 13 19:32:15.374755 systemd-networkd[874]: eth0: Link UP
Feb 13 19:32:15.374902 systemd-networkd[874]: eth0: Gained carrier
Feb 13 19:32:15.374911 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:32:15.386239 systemd-networkd[874]: enP60846s1: Gained carrier
Feb 13 19:32:15.413737 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 19:32:16.129814 ignition[878]: Ignition 2.20.0
Feb 13 19:32:16.129825 ignition[878]: Stage: fetch-offline
Feb 13 19:32:16.134460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:32:16.129861 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:16.129869 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:16.129966 ignition[878]: parsed url from cmdline: ""
Feb 13 19:32:16.129969 ignition[878]: no config URL provided
Feb 13 19:32:16.129973 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:32:16.161862 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:32:16.129980 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:32:16.129984 ignition[878]: failed to fetch config: resource requires networking
Feb 13 19:32:16.130155 ignition[878]: Ignition finished successfully
Feb 13 19:32:16.178850 ignition[886]: Ignition 2.20.0
Feb 13 19:32:16.178858 ignition[886]: Stage: fetch
Feb 13 19:32:16.179014 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:16.179023 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:16.179112 ignition[886]: parsed url from cmdline: ""
Feb 13 19:32:16.179115 ignition[886]: no config URL provided
Feb 13 19:32:16.179119 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:32:16.179127 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:32:16.179149 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 19:32:16.304851 ignition[886]: GET result: OK
Feb 13 19:32:16.304946 ignition[886]: config has been read from IMDS userdata
Feb 13 19:32:16.304986 ignition[886]: parsing config with SHA512: 73bccacf4c0a2e5da4b8dabf0fd2b86de02ec81d9fa0e54e54f8ac5db1d3353de5b2771dd86af46c39f67513bb8663e2e85d1422984f52a2bd0bb1c72dcb79e1
Feb 13 19:32:16.313952 unknown[886]: fetched base config from "system"
Feb 13 19:32:16.313966 unknown[886]: fetched base config from "system"
Feb 13 19:32:16.315079 ignition[886]: fetch: fetch complete
Feb 13 19:32:16.313973 unknown[886]: fetched user config from "azure"
Feb 13 19:32:16.315089 ignition[886]: fetch: fetch passed
Feb 13 19:32:16.317081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:32:16.315138 ignition[886]: Ignition finished successfully
Feb 13 19:32:16.334338 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:32:16.357120 ignition[892]: Ignition 2.20.0
Feb 13 19:32:16.363622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:32:16.357126 ignition[892]: Stage: kargs
Feb 13 19:32:16.357292 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:16.357300 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:16.358207 ignition[892]: kargs: kargs passed
Feb 13 19:32:16.385925 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:32:16.358248 ignition[892]: Ignition finished successfully
Feb 13 19:32:16.412765 ignition[898]: Ignition 2.20.0
Feb 13 19:32:16.413352 ignition[898]: Stage: disks
Feb 13 19:32:16.413563 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:16.421726 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:32:16.413573 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:16.431300 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:32:16.414586 ignition[898]: disks: disks passed
Feb 13 19:32:16.442560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:32:16.414625 ignition[898]: Ignition finished successfully
Feb 13 19:32:16.455548 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:32:16.466868 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:32:16.475818 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:32:16.507968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:32:16.565685 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 19:32:16.571756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:32:16.588856 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:32:16.644005 systemd-networkd[874]: enP60846s1: Gained IPv6LL
Feb 13 19:32:16.648745 kernel: EXT4-fs (sda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:32:16.646124 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:32:16.653607 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:32:16.698775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:32:16.709349 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:32:16.721402 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:32:16.741712 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917)
Feb 13 19:32:16.749100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:32:16.779374 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:32:16.779397 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:32:16.779407 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:32:16.749142 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:32:16.763931 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:32:16.803713 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:32:16.804988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:32:16.818298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:32:17.318527 coreos-metadata[919]: Feb 13 19:32:17.318 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 19:32:17.331044 coreos-metadata[919]: Feb 13 19:32:17.330 INFO Fetch successful
Feb 13 19:32:17.338565 coreos-metadata[919]: Feb 13 19:32:17.333 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 19:32:17.351905 coreos-metadata[919]: Feb 13 19:32:17.351 INFO Fetch successful
Feb 13 19:32:17.369125 coreos-metadata[919]: Feb 13 19:32:17.367 INFO wrote hostname ci-4230.0.1-a-2ba2208742 to /sysroot/etc/hostname
Feb 13 19:32:17.380788 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:32:17.407777 systemd-networkd[874]: eth0: Gained IPv6LL
Feb 13 19:32:17.607509 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:32:17.642053 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:32:17.651122 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:32:17.659807 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:32:18.329758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:32:18.343974 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:32:18.351841 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:32:18.377183 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:32:18.383600 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:32:18.401722 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:32:18.415029 ignition[1037]: INFO : Ignition 2.20.0
Feb 13 19:32:18.420730 ignition[1037]: INFO : Stage: mount
Feb 13 19:32:18.420730 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:18.420730 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:18.420730 ignition[1037]: INFO : mount: mount passed
Feb 13 19:32:18.420730 ignition[1037]: INFO : Ignition finished successfully
Feb 13 19:32:18.417409 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:32:18.448789 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:32:18.464967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:32:18.511375 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Feb 13 19:32:18.511439 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:32:18.522305 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:32:18.522349 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:32:18.528710 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:32:18.530643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:32:18.552686 ignition[1067]: INFO : Ignition 2.20.0
Feb 13 19:32:18.552686 ignition[1067]: INFO : Stage: files
Feb 13 19:32:18.560709 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:18.560709 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:32:18.560709 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:32:18.581854 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:32:18.581854 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:32:18.632926 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:32:18.640703 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:32:18.640703 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:32:18.633315 unknown[1067]: wrote ssh authorized keys file for user: core
Feb 13 19:32:18.661651 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:32:18.661651 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:32:18.742597 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:32:18.852105 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:32:18.852105 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:32:18.873320 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:32:19.286592 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:32:19.360871 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:32:19.360871 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:32:19.380965 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:32:19.773840 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:32:20.012321 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:32:20.012321 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:32:20.045060 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:20.058004 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:20.058004 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
[started] setting preset to enabled for "prepare-helm.service" Feb 13 19:32:20.058004 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:32:20.058004 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:32:20.058004 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:32:20.058004 ignition[1067]: INFO : files: files passed Feb 13 19:32:20.058004 ignition[1067]: INFO : Ignition finished successfully Feb 13 19:32:20.058420 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:32:20.102844 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:32:20.135879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:32:20.159343 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:32:20.188949 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:32:20.188949 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:32:20.159429 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:32:20.219248 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:32:20.176944 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:32:20.184325 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:32:20.216373 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:32:20.249878 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:32:20.251731 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:32:20.262098 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:32:20.273040 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:32:20.285577 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:32:20.305964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:32:20.317122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:32:20.341093 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:32:20.360401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:32:20.367937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:32:20.381054 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:32:20.392559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:32:20.392688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:32:20.409538 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:32:20.415871 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:32:20.427554 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:32:20.439115 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
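The op(3) through op(b) entries above are Ignition's files stage fetching remote artifacts and writing them beneath the still-mounted /sysroot. A minimal sketch of what a single file-write op amounts to, with the URL and destination taken from op(3) in the log and everything else assumed (Ignition itself also verifies checksums, retries on failure, and tracks per-op state):

    # Illustrative re-creation of Ignition op(3); not Ignition's actual code.
    import pathlib
    import urllib.request

    sysroot = pathlib.Path("/sysroot")  # target root, mounted by the initrd
    url = "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"  # from op(3)
    dest = sysroot / "opt" / "helm-v3.13.2-linux-arm64.tar.gz"

    dest.parent.mkdir(parents=True, exist_ok=True)
    print(f"GET {url}: attempt #1")
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())  # the log's "GET result: OK" is a 200 here
    print(f'[finished] writing file "{dest}"')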
Feb 13 19:32:20.450672 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:32:20.463282 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:32:20.475106 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:32:20.487809 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:32:20.499236 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:32:20.512158 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:32:20.522014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:32:20.522151 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:32:20.537511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:32:20.544020 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:32:20.555953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:32:20.561598 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:32:20.568787 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:32:20.568912 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:32:20.586995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:32:20.587128 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:32:20.594438 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:32:20.594544 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:32:20.606272 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 19:32:20.606383 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:32:20.698023 ignition[1119]: INFO : Ignition 2.20.0 Feb 13 19:32:20.698023 ignition[1119]: INFO : Stage: umount Feb 13 19:32:20.698023 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:32:20.698023 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:32:20.698023 ignition[1119]: INFO : umount: umount passed Feb 13 19:32:20.698023 ignition[1119]: INFO : Ignition finished successfully Feb 13 19:32:20.639943 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:32:20.664322 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:32:20.676897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:32:20.677119 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:32:20.691408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:32:20.691570 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:32:20.707965 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:32:20.708068 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:32:20.721426 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:32:20.731887 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:32:20.731980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:32:20.741399 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:32:20.741479 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Feb 13 19:32:20.751511 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:32:20.751574 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:32:20.763004 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:32:20.763053 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:32:20.774047 systemd[1]: Stopped target network.target - Network. Feb 13 19:32:20.784582 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:32:20.784647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:32:20.798104 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:32:20.812802 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:32:20.818792 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:32:20.826984 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:32:20.838475 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:32:20.850097 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:32:20.850142 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:32:20.865828 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:32:20.865865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:32:20.877517 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:32:20.877574 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:32:20.888655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:32:20.888718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:32:20.901906 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:32:20.913970 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:32:20.933394 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:32:20.933829 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:32:20.957377 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:32:21.162183 kernel: hv_netvsc 002248b9-996e-0022-48b9-996e002248b9 eth0: Data path switched from VF: enP60846s1 Feb 13 19:32:20.957610 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:32:20.957715 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:32:20.979839 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:32:20.980512 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:32:20.980583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:32:21.011883 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:32:21.021514 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:32:21.021591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:32:21.034057 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:32:21.034112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:32:21.049250 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:32:21.049300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Feb 13 19:32:21.055638 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:32:21.055680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:32:21.074248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:32:21.084891 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:32:21.084964 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:32:21.129574 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:32:21.129766 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:32:21.142071 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:32:21.142116 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:32:21.162462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:32:21.162497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:32:21.174636 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:32:21.174710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:32:21.193788 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:32:21.193851 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:32:21.204888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:32:21.204956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:32:21.238904 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:32:21.251770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:32:21.251842 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:32:21.271378 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:32:21.271437 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:32:21.278633 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:32:21.278683 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:32:21.291822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:32:21.291863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:21.514579 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 19:32:21.311646 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:32:21.311720 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:32:21.312094 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:32:21.312207 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:32:21.323739 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:32:21.323823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:32:21.336152 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:32:21.336227 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Feb 13 19:32:21.350667 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:32:21.361018 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:32:21.361094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:32:21.392127 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:32:21.412603 systemd[1]: Switching root. Feb 13 19:32:21.597613 systemd-journald[218]: Journal stopped Feb 13 19:32:27.972977 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:32:27.973001 kernel: SELinux: policy capability open_perms=1 Feb 13 19:32:27.973011 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:32:27.973019 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:32:27.973028 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:32:27.973036 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:32:27.973045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:32:27.973053 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:32:27.973060 kernel: audit: type=1403 audit(1739475142.475:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:32:27.973070 systemd[1]: Successfully loaded SELinux policy in 163.972ms. Feb 13 19:32:27.973081 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.416ms. Feb 13 19:32:27.973091 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:32:27.973099 systemd[1]: Detected virtualization microsoft. Feb 13 19:32:27.973108 systemd[1]: Detected architecture arm64. Feb 13 19:32:27.973117 systemd[1]: Detected first boot. Feb 13 19:32:27.973128 systemd[1]: Hostname set to <ci-4230.0.1-a-2ba2208742>. Feb 13 19:32:27.973136 systemd[1]: Initializing machine ID from random generator. Feb 13 19:32:27.973145 zram_generator::config[1163]: No configuration found. Feb 13 19:32:27.973154 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:32:27.973162 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:32:27.973172 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:32:27.973185 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:32:27.973195 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:32:27.973204 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:32:27.973213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:32:27.973224 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:32:27.973233 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:32:27.973242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:32:27.973250 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:32:27.973261 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:32:27.973270 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Feb 13 19:32:27.973279 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:32:27.973288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:32:27.973297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:32:27.973306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:32:27.973314 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:32:27.973323 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:32:27.973334 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:32:27.973343 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:32:27.973352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:32:27.973363 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:32:27.973372 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:32:27.973382 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:32:27.973391 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:32:27.973400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:32:27.973410 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:32:27.973421 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:32:27.973430 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:32:27.973438 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:32:27.973447 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:32:27.973456 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:32:27.973469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:32:27.973478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:32:27.973487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:32:27.973496 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:32:27.973505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:32:27.973514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:32:27.973523 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:32:27.973534 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:32:27.973543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:32:27.973552 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:32:27.973562 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:32:27.973571 systemd[1]: Reached target machines.target - Containers. Feb 13 19:32:27.973580 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Feb 13 19:32:27.973589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:32:27.973598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:32:27.973609 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:32:27.973619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:32:27.973629 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:32:27.973638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:32:27.973647 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:32:27.973656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:32:27.973665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:32:27.973674 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:32:27.973685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:32:27.976363 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:32:27.976397 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:32:27.976408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:32:27.976418 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:32:27.976427 kernel: fuse: init (API version 7.39) Feb 13 19:32:27.976470 systemd-journald[1243]: Collecting audit messages is disabled. Feb 13 19:32:27.976497 kernel: loop: module loaded Feb 13 19:32:27.976508 systemd-journald[1243]: Journal started Feb 13 19:32:27.976528 systemd-journald[1243]: Runtime Journal (/run/log/journal/edbd69acd2c347d3969f0d22e5f2715d) is 8M, max 78.5M, 70.5M free. Feb 13 19:32:26.922024 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:32:26.936487 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 19:32:26.936880 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:32:26.937200 systemd[1]: systemd-journald.service: Consumed 3.439s CPU time. Feb 13 19:32:27.992103 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:32:28.010943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:32:28.031184 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:32:28.045371 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:32:28.062306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:32:28.067721 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:32:28.067775 systemd[1]: Stopped verity-setup.service. Feb 13 19:32:28.090427 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:32:28.091289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:32:28.096963 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 19:32:28.103560 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:32:28.109078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:32:28.115105 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:32:28.121429 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:32:28.126626 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:32:28.133940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:32:28.134106 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:32:28.141455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:32:28.141614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:32:28.147978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:32:28.148126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:32:28.154930 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:32:28.155079 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:32:28.160955 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:32:28.162730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:32:28.177810 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:32:28.186864 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:32:28.193061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:32:28.195981 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:32:28.203510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:32:28.210212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:32:28.218853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:32:28.226422 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:32:28.236270 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:32:28.242556 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:32:28.242658 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:32:28.250014 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:32:28.263863 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:32:28.271342 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:32:28.277065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:32:28.318737 kernel: ACPI: bus type drm_connector registered Feb 13 19:32:28.352915 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:32:28.362896 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Feb 13 19:32:28.372039 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:32:28.376617 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Feb 13 19:32:28.376636 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Feb 13 19:32:28.383021 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:32:28.400853 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:32:28.410721 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:32:28.417371 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:32:28.417553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:32:28.425159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:32:28.431717 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:32:28.439935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:32:28.446922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:32:28.454886 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:32:28.470845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:32:28.479474 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:32:28.487946 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:32:28.504985 udevadm[1312]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:32:28.699212 systemd-journald[1243]: Time spent on flushing to /var/log/journal/edbd69acd2c347d3969f0d22e5f2715d is 13.887ms for 926 entries. Feb 13 19:32:28.699212 systemd-journald[1243]: System Journal (/var/log/journal/edbd69acd2c347d3969f0d22e5f2715d) is 8M, max 2.6G, 2.6G free. Feb 13 19:32:28.776538 systemd-journald[1243]: Received client request to flush runtime journal. Feb 13 19:32:28.776571 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 19:32:28.745328 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:32:28.758918 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:32:28.770955 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:32:28.778809 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:32:28.800259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:32:29.655426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:32:29.658725 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:32:29.842600 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:32:29.856867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:32:29.876287 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Feb 13 19:32:29.876306 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. 
Feb 13 19:32:29.880453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:32:30.619720 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:32:30.659716 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 19:32:32.296725 kernel: loop2: detected capacity change from 0 to 28720 Feb 13 19:32:32.682993 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:32:32.694863 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:32:32.721397 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Feb 13 19:32:33.283730 kernel: loop3: detected capacity change from 0 to 189592 Feb 13 19:32:33.323718 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 19:32:33.335713 kernel: loop5: detected capacity change from 0 to 113512 Feb 13 19:32:33.346709 kernel: loop6: detected capacity change from 0 to 28720 Feb 13 19:32:33.356710 kernel: loop7: detected capacity change from 0 to 189592 Feb 13 19:32:33.364386 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 19:32:33.364851 (sd-merge)[1333]: Merged extensions into '/usr'. Feb 13 19:32:33.368383 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:32:33.368499 systemd[1]: Reloading... Feb 13 19:32:33.441721 zram_generator::config[1360]: No configuration found. Feb 13 19:32:33.633863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:33.703540 systemd[1]: Reloading finished in 334 ms. Feb 13 19:32:33.721608 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:32:33.744623 systemd[1]: Starting ensure-sysext.service... Feb 13 19:32:33.753648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:32:33.770846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:32:33.784612 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:32:33.784846 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:32:33.785480 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:32:33.785708 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Feb 13 19:32:33.785760 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Feb 13 19:32:33.792202 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:32:33.792220 systemd-tmpfiles[1427]: Skipping /boot Feb 13 19:32:33.796533 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:32:33.807389 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:32:33.807883 systemd-tmpfiles[1427]: Skipping /boot Feb 13 19:32:33.813941 systemd[1]: Reload requested from client PID 1422 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:32:33.813957 systemd[1]: Reloading... Feb 13 19:32:33.941724 zram_generator::config[1470]: No configuration found. 
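The (sd-merge) lines above are systemd-sysext discovering extension images, among them the kubernetes.raw link Ignition wrote earlier, and merging them over /usr. A loose sketch of just the discovery step, assuming the standard sysext search directories; the real tool also validates each image's extension-release metadata, and entries such as 'oem-azure' come from images shipped with the OS rather than from /etc/extensions:

    # Rough sketch of systemd-sysext image discovery (Python 3.9+).
    import os

    search_dirs = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    found = []
    for d in search_dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            # candidate images are *.raw files (or symlinks) and plain directories
            if name.endswith(".raw") or os.path.isdir(os.path.join(d, name)):
                found.append(name.removesuffix(".raw"))

    print("Using extensions", ", ".join(f"'{n}'" for n in found))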
Feb 13 19:32:34.076371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:34.153986 kernel: hv_vmbus: registering driver hv_balloon Feb 13 19:32:34.154084 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 19:32:34.158924 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 19:32:34.163141 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 19:32:34.159827 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:32:34.160184 systemd[1]: Reloading finished in 345 ms. Feb 13 19:32:34.167831 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:32:34.167913 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 19:32:34.180381 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 19:32:34.188148 kernel: Console: switching to colour dummy device 80x25 Feb 13 19:32:34.197028 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 19:32:34.207743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:32:34.242552 systemd[1]: Finished ensure-sysext.service. Feb 13 19:32:34.260025 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:32:34.276773 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1425) Feb 13 19:32:34.412050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:32:34.418897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:32:34.420325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:32:34.430112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:32:34.440827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:32:34.449771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:32:34.456105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:32:34.456389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:32:34.457927 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:32:34.472495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:32:34.478370 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:32:34.497097 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:32:34.506514 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:32:34.515025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:34.533826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:32:34.533993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:32:34.544478 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:32:34.546252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:32:34.556498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:32:34.557785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:32:34.567401 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:32:34.567571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:32:34.593432 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:32:34.601237 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:32:34.610178 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:32:34.641313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 19:32:34.653912 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:32:34.661886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:32:34.668528 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:32:34.668605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:32:34.849533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:32:34.849844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:34.858882 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:32:34.864987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:34.896338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:32:35.051995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:32:35.179324 systemd-networkd[1436]: lo: Link UP Feb 13 19:32:35.179332 systemd-networkd[1436]: lo: Gained carrier Feb 13 19:32:35.181801 systemd-networkd[1436]: Enumeration completed Feb 13 19:32:35.181974 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:32:35.183025 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:35.183029 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:32:35.195270 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:32:35.196948 lvm[1636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:32:35.203619 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:32:35.227327 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:32:35.234986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:32:35.245846 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:32:35.257710 kernel: mlx5_core edae:00:02.0 enP60846s1: Link up Feb 13 19:32:35.261666 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:32:35.287562 systemd-networkd[1436]: enP60846s1: Link UP Feb 13 19:32:35.287762 kernel: hv_netvsc 002248b9-996e-0022-48b9-996e002248b9 eth0: Data path switched to VF: enP60846s1 Feb 13 19:32:35.288105 systemd-networkd[1436]: eth0: Link UP Feb 13 19:32:35.288182 systemd-networkd[1436]: eth0: Gained carrier Feb 13 19:32:35.288265 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:35.297073 systemd-networkd[1436]: enP60846s1: Gained carrier Feb 13 19:32:35.297680 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:32:35.309738 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 19:32:35.547281 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:32:35.609514 augenrules[1665]: No rules Feb 13 19:32:35.610022 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:32:35.610249 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:32:35.643466 systemd-resolved[1612]: Positive Trust Anchors: Feb 13 19:32:35.643816 systemd-resolved[1612]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:32:35.643851 systemd-resolved[1612]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:32:35.844618 systemd-resolved[1612]: Using system hostname 'ci-4230.0.1-a-2ba2208742'. Feb 13 19:32:35.846279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:32:35.852797 systemd[1]: Reached target network.target - Network. Feb 13 19:32:35.857791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:32:35.957383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:36.991791 systemd-networkd[1436]: enP60846s1: Gained IPv6LL Feb 13 19:32:36.992351 systemd-networkd[1436]: eth0: Gained IPv6LL Feb 13 19:32:36.995430 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:32:37.003461 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:32:37.249184 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:32:37.258119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:32:41.341245 ldconfig[1290]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:32:41.356512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:32:41.368877 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:32:41.383748 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:32:41.391086 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:32:41.397867 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:32:41.405378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:32:41.413783 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:32:41.420318 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:32:41.428458 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:32:41.436349 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:32:41.436384 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:32:41.442092 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:32:41.449768 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:32:41.458478 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:32:41.466897 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:32:41.476385 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:32:41.484835 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:32:41.499374 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:32:41.506385 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:32:41.514614 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:32:41.521400 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:32:41.527781 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:32:41.533594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:32:41.533621 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:32:41.546792 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 19:32:41.556825 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:32:41.580838 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:32:41.589018 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 19:32:41.589921 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:32:41.597113 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:32:41.605659 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:32:41.607199 jq[1688]: false Feb 13 19:32:41.612337 chronyd[1691]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 19:32:41.615989 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Feb 13 19:32:41.616037 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 19:32:41.617903 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 19:32:41.626007 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 19:32:41.629855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:41.640491 KVP[1692]: KVP starting; pid is:1692 Feb 13 19:32:41.640961 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:32:41.644558 KVP[1692]: KVP LIC Version: 3.1 Feb 13 19:32:41.646261 kernel: hv_utils: KVP IC version 4.0 Feb 13 19:32:41.658895 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:32:41.665556 chronyd[1691]: Timezone right/UTC failed leap second check, ignoring Feb 13 19:32:41.665775 chronyd[1691]: Loaded seccomp filter (level 2) Feb 13 19:32:41.668113 extend-filesystems[1689]: Found loop4 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found loop5 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found loop6 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found loop7 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda1 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda2 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda3 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found usr Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda4 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda6 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda7 Feb 13 19:32:41.673507 extend-filesystems[1689]: Found sda9 Feb 13 19:32:41.673507 extend-filesystems[1689]: Checking size of /dev/sda9 Feb 13 19:32:41.672008 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:32:41.880216 extend-filesystems[1689]: Old size kept for /dev/sda9 Feb 13 19:32:41.880216 extend-filesystems[1689]: Found sr0 Feb 13 19:32:41.941189 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1735) Feb 13 19:32:41.798784 dbus-daemon[1687]: [system] SELinux support is enabled Feb 13 19:32:41.690654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
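The extend-filesystems "Found ..." run above is an enumeration of the block devices visible in sysfs, done before the service checks the size of /dev/sda9 (kept unchanged here). The listing is easy to reproduce:

    # Print every block device sysfs knows about, mirroring the
    # "Found loop4 ... Found sda9 ... Found sr0" entries above.
    import os

    for dev in sorted(os.listdir("/sys/class/block")):
        print(f"Found {dev}")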
Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.912 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.923 INFO Fetch successful Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.923 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.932 INFO Fetch successful Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.933 INFO Fetching http://168.63.129.16/machine/bb8cd1d4-743b-4fc5-8d05-bc0cff1a3362/608608aa%2D7f72%2D4e49%2Da429%2D427a1fec3f73.%5Fci%2D4230.0.1%2Da%2D2ba2208742?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.935 INFO Fetch successful Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.935 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 19:32:41.973428 coreos-metadata[1683]: Feb 13 19:32:41.950 INFO Fetch successful Feb 13 19:32:41.711876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:32:41.725943 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:32:41.742427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:32:41.743044 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:32:41.975535 update_engine[1713]: I20250213 19:32:41.815989 1713 main.cc:92] Flatcar Update Engine starting Feb 13 19:32:41.975535 update_engine[1713]: I20250213 19:32:41.829833 1713 update_check_scheduler.cc:74] Next update check in 4m50s Feb 13 19:32:41.752171 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:32:41.987214 jq[1719]: true Feb 13 19:32:41.770834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:32:41.781071 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 19:32:41.800981 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:32:41.841842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:32:41.842037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:32:41.842331 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:32:41.842489 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:32:41.876047 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:32:41.876227 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:32:41.907057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:32:41.935047 systemd-logind[1711]: New seat seat0. Feb 13 19:32:41.938577 systemd-logind[1711]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:32:41.962224 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:32:41.974659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:32:41.975771 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
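The coreos-metadata fetches above touch two Azure endpoints: the wireserver at 168.63.129.16 for the goal state, and the Instance Metadata Service at 169.254.169.254 for the VM size. The IMDS query can be replayed from inside the VM using the exact URL from the log; the service only insists on a Metadata: true header and must not be reached through a proxy:

    # Re-issue the IMDS vmSize query logged above (works only inside an Azure VM).
    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # the instance size; value varies per VM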
Feb 13 19:32:42.014862 jq[1772]: true Feb 13 19:32:42.016097 (ntainerd)[1773]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:32:42.033907 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:32:42.049125 dbus-daemon[1687]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:32:42.065784 tar[1750]: linux-arm64/helm Feb 13 19:32:42.066323 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:32:42.080469 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:32:42.080836 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:32:42.081674 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:32:42.094533 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:32:42.094796 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:32:42.119958 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:32:42.219451 bash[1825]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:32:42.222466 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:32:42.233559 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:32:42.363541 locksmithd[1818]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:32:42.472848 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:32:42.508202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:32:42.524933 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:32:42.545288 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 19:32:42.575593 containerd[1773]: time="2025-02-13T19:32:42.575489540Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:32:42.576080 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:32:42.576305 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:32:42.596017 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:32:42.621070 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:32:42.644043 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:32:42.664027 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:32:42.673587 containerd[1773]: time="2025-02-13T19:32:42.673539060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.675897 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:32:42.682753 containerd[1773]: time="2025-02-13T19:32:42.682681340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:32:42.682753 containerd[1773]: time="2025-02-13T19:32:42.682748180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:32:42.682872 containerd[1773]: time="2025-02-13T19:32:42.682767140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:32:42.682960 containerd[1773]: time="2025-02-13T19:32:42.682936020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:32:42.682960 containerd[1773]: time="2025-02-13T19:32:42.682958540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683047 containerd[1773]: time="2025-02-13T19:32:42.683025180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683047 containerd[1773]: time="2025-02-13T19:32:42.683043300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683276 containerd[1773]: time="2025-02-13T19:32:42.683252460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683276 containerd[1773]: time="2025-02-13T19:32:42.683272660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683327 containerd[1773]: time="2025-02-13T19:32:42.683286540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683327 containerd[1773]: time="2025-02-13T19:32:42.683296820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683405 containerd[1773]: time="2025-02-13T19:32:42.683381220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683610 containerd[1773]: time="2025-02-13T19:32:42.683587020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683754 containerd[1773]: time="2025-02-13T19:32:42.683731820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:42.683754 containerd[1773]: time="2025-02-13T19:32:42.683752140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:32:42.683854 containerd[1773]: time="2025-02-13T19:32:42.683835420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:32:42.683897 containerd[1773]: time="2025-02-13T19:32:42.683880580Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:32:42.693265 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 19:32:42.706360 containerd[1773]: time="2025-02-13T19:32:42.706314620Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:32:42.707113 containerd[1773]: time="2025-02-13T19:32:42.707084620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:32:42.707204 containerd[1773]: time="2025-02-13T19:32:42.707189500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:32:42.707309 containerd[1773]: time="2025-02-13T19:32:42.707279420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:32:42.707361 containerd[1773]: time="2025-02-13T19:32:42.707316700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:32:42.708588 containerd[1773]: time="2025-02-13T19:32:42.707535460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:32:42.708989 containerd[1773]: time="2025-02-13T19:32:42.708877180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:32:42.709052 containerd[1773]: time="2025-02-13T19:32:42.709029900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:32:42.709052 containerd[1773]: time="2025-02-13T19:32:42.709046020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:32:42.709086 containerd[1773]: time="2025-02-13T19:32:42.709059540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:32:42.709086 containerd[1773]: time="2025-02-13T19:32:42.709072900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709127 containerd[1773]: time="2025-02-13T19:32:42.709090340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709127 containerd[1773]: time="2025-02-13T19:32:42.709104620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709127 containerd[1773]: time="2025-02-13T19:32:42.709118500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709173 containerd[1773]: time="2025-02-13T19:32:42.709133020Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709173 containerd[1773]: time="2025-02-13T19:32:42.709147220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709173 containerd[1773]: time="2025-02-13T19:32:42.709159220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1 Feb 13 19:32:42.709173 containerd[1773]: time="2025-02-13T19:32:42.709170780Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:32:42.709235 containerd[1773]: time="2025-02-13T19:32:42.709190460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709235 containerd[1773]: time="2025-02-13T19:32:42.709203900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709235 containerd[1773]: time="2025-02-13T19:32:42.709215740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709235 containerd[1773]: time="2025-02-13T19:32:42.709228340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709303 containerd[1773]: time="2025-02-13T19:32:42.709239100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709303 containerd[1773]: time="2025-02-13T19:32:42.709251660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709303 containerd[1773]: time="2025-02-13T19:32:42.709263060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709303 containerd[1773]: time="2025-02-13T19:32:42.709275540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709303 containerd[1773]: time="2025-02-13T19:32:42.709287820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709388 containerd[1773]: time="2025-02-13T19:32:42.709319940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709388 containerd[1773]: time="2025-02-13T19:32:42.709331140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709388 containerd[1773]: time="2025-02-13T19:32:42.709342940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709388 containerd[1773]: time="2025-02-13T19:32:42.709354500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709388 containerd[1773]: time="2025-02-13T19:32:42.709368700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:32:42.709472 containerd[1773]: time="2025-02-13T19:32:42.709388140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709472 containerd[1773]: time="2025-02-13T19:32:42.709400420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709472 containerd[1773]: time="2025-02-13T19:32:42.709411060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:32:42.709472 containerd[1773]: time="2025-02-13T19:32:42.709460460Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709477980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709488540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709500020Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709511500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709523380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:32:42.709543 containerd[1773]: time="2025-02-13T19:32:42.709533420Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:32:42.709639 containerd[1773]: time="2025-02-13T19:32:42.709545060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:32:42.711185 containerd[1773]: time="2025-02-13T19:32:42.710288460Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:32:42.711185 containerd[1773]: time="2025-02-13T19:32:42.710364300Z" level=info msg="Connect containerd service" Feb 13 19:32:42.711185 containerd[1773]: time="2025-02-13T19:32:42.710407860Z" level=info msg="using legacy CRI server" Feb 13 19:32:42.711185 containerd[1773]: time="2025-02-13T19:32:42.710415060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:32:42.711185 containerd[1773]: time="2025-02-13T19:32:42.710572220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:32:42.712859 containerd[1773]: time="2025-02-13T19:32:42.712796300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:32:42.712859 containerd[1773]: time="2025-02-13T19:32:42.712960260Z" level=info msg="Start subscribing containerd event" Feb 13 19:32:42.713494 containerd[1773]: time="2025-02-13T19:32:42.713165020Z" level=info msg="Start recovering state" Feb 13 19:32:42.713576 containerd[1773]: time="2025-02-13T19:32:42.713560220Z" level=info msg="Start event monitor" Feb 13 19:32:42.714216 containerd[1773]: time="2025-02-13T19:32:42.713614900Z" level=info msg="Start snapshots syncer" Feb 13 19:32:42.714216 containerd[1773]: time="2025-02-13T19:32:42.713629540Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:32:42.714216 containerd[1773]: time="2025-02-13T19:32:42.713637260Z" level=info msg="Start streaming server" Feb 13 19:32:42.714565 containerd[1773]: time="2025-02-13T19:32:42.714538700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:32:42.715675 containerd[1773]: time="2025-02-13T19:32:42.715020460Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:32:42.715675 containerd[1773]: time="2025-02-13T19:32:42.715091860Z" level=info msg="containerd successfully booted in 0.146261s" Feb 13 19:32:42.715189 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:32:42.738010 tar[1750]: linux-arm64/LICENSE Feb 13 19:32:42.738094 tar[1750]: linux-arm64/README.md Feb 13 19:32:42.751975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:32:42.908395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:42.917122 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:32:42.927783 systemd[1]: Startup finished in 734ms (kernel) + 12.073s (initrd) + 20.615s (userspace) = 33.423s. 
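As a quick sanity check on the startup summary above (assuming only that systemd sums the three phases; the apparent 1 ms gap is rounding of microsecond-precision timestamps):

    # Components of the "Startup finished" line, in milliseconds
    parts_ms = {"kernel": 734, "initrd": 12_073, "userspace": 20_615}
    print(sum(parts_ms.values()))  # 33422 ms, reported as 33.423 s after rounding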
Feb 13 19:32:42.930273 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:32:43.179466 login[1859]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:43.181018 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:43.192939 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:32:43.203050 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:32:43.207791 systemd-logind[1711]: New session 2 of user core. Feb 13 19:32:43.214330 systemd-logind[1711]: New session 1 of user core. Feb 13 19:32:43.221512 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:32:43.228034 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:32:43.232067 (systemd)[1884]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:32:43.235441 systemd-logind[1711]: New session c1 of user core. Feb 13 19:32:43.388907 systemd[1884]: Queued start job for default target default.target. Feb 13 19:32:43.394769 systemd[1884]: Created slice app.slice - User Application Slice. Feb 13 19:32:43.394803 systemd[1884]: Reached target paths.target - Paths. Feb 13 19:32:43.394843 systemd[1884]: Reached target timers.target - Timers. Feb 13 19:32:43.397869 systemd[1884]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:32:43.402043 kubelet[1873]: E0213 19:32:43.401999 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:32:43.404837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:32:43.404960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:32:43.405243 systemd[1]: kubelet.service: Consumed 672ms CPU time, 234.2M memory peak. Feb 13 19:32:43.407189 systemd[1884]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:32:43.407244 systemd[1884]: Reached target sockets.target - Sockets. Feb 13 19:32:43.407284 systemd[1884]: Reached target basic.target - Basic System. Feb 13 19:32:43.407312 systemd[1884]: Reached target default.target - Main User Target. Feb 13 19:32:43.407337 systemd[1884]: Startup finished in 165ms. Feb 13 19:32:43.407455 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:32:43.417888 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:32:43.419436 systemd[1]: Started session-2.scope - Session 2 of User core. 
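The kubelet failure above recurs on every scheduled restart through the rest of this log, and it is expected during provisioning: /var/lib/kubelet/config.yaml is only written once the node is bootstrapped (for example by kubeadm join), so until then the unit exits and systemd retries. A hypothetical pre-flight check mirroring the test that produces the run.go:72 error:

    from pathlib import Path

    # Hypothetical illustration only; the real kubelet exits with the
    # "failed to load kubelet config file" error seen above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        raise SystemExit(f"kubelet config not found: {cfg} (node not bootstrapped yet)")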
Feb 13 19:32:44.495227 waagent[1863]: 2025-02-13T19:32:44.495136Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 19:32:44.501537 waagent[1863]: 2025-02-13T19:32:44.501476Z INFO Daemon Daemon OS: flatcar 4230.0.1 Feb 13 19:32:44.506284 waagent[1863]: 2025-02-13T19:32:44.506236Z INFO Daemon Daemon Python: 3.11.11 Feb 13 19:32:44.511158 waagent[1863]: 2025-02-13T19:32:44.511105Z INFO Daemon Daemon Run daemon Feb 13 19:32:44.516136 waagent[1863]: 2025-02-13T19:32:44.516082Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.1' Feb 13 19:32:44.525636 waagent[1863]: 2025-02-13T19:32:44.525586Z INFO Daemon Daemon Using waagent for provisioning Feb 13 19:32:44.531405 waagent[1863]: 2025-02-13T19:32:44.531362Z INFO Daemon Daemon Activate resource disk Feb 13 19:32:44.536390 waagent[1863]: 2025-02-13T19:32:44.536350Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 19:32:44.549707 waagent[1863]: 2025-02-13T19:32:44.549648Z INFO Daemon Daemon Found device: None Feb 13 19:32:44.554514 waagent[1863]: 2025-02-13T19:32:44.554472Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 19:32:44.564433 waagent[1863]: 2025-02-13T19:32:44.564383Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 19:32:44.577640 waagent[1863]: 2025-02-13T19:32:44.577591Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:32:44.584095 waagent[1863]: 2025-02-13T19:32:44.584051Z INFO Daemon Daemon Running default provisioning handler Feb 13 19:32:44.596154 waagent[1863]: 2025-02-13T19:32:44.596085Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 19:32:44.611824 waagent[1863]: 2025-02-13T19:32:44.611764Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 19:32:44.622221 waagent[1863]: 2025-02-13T19:32:44.622169Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 19:32:44.627377 waagent[1863]: 2025-02-13T19:32:44.627330Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 19:32:44.679079 waagent[1863]: 2025-02-13T19:32:44.678369Z INFO Daemon Daemon Successfully mounted dvd Feb 13 19:32:44.707994 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 19:32:44.710226 waagent[1863]: 2025-02-13T19:32:44.710119Z INFO Daemon Daemon Detect protocol endpoint Feb 13 19:32:44.715798 waagent[1863]: 2025-02-13T19:32:44.715741Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:32:44.722063 waagent[1863]: 2025-02-13T19:32:44.722011Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 19:32:44.729462 waagent[1863]: 2025-02-13T19:32:44.729407Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 19:32:44.740750 waagent[1863]: 2025-02-13T19:32:44.735601Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 19:32:44.741017 waagent[1863]: 2025-02-13T19:32:44.740965Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 19:32:44.770934 waagent[1863]: 2025-02-13T19:32:44.770850Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 19:32:44.778563 waagent[1863]: 2025-02-13T19:32:44.778532Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 19:32:44.784219 waagent[1863]: 2025-02-13T19:32:44.784177Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 19:32:45.038107 waagent[1863]: 2025-02-13T19:32:45.037953Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 19:32:45.045986 waagent[1863]: 2025-02-13T19:32:45.045915Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 19:32:45.055913 waagent[1863]: 2025-02-13T19:32:45.055858Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:32:45.114135 waagent[1863]: 2025-02-13T19:32:45.114085Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 19:32:45.120673 waagent[1863]: 2025-02-13T19:32:45.120624Z INFO Daemon Feb 13 19:32:45.124005 waagent[1863]: 2025-02-13T19:32:45.123956Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8dca0e7f-ea1e-4243-817a-e08f24c9cb3c eTag: 15570550641084362983 source: Fabric] Feb 13 19:32:45.137452 waagent[1863]: 2025-02-13T19:32:45.137399Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 19:32:45.145587 waagent[1863]: 2025-02-13T19:32:45.145534Z INFO Daemon Feb 13 19:32:45.148905 waagent[1863]: 2025-02-13T19:32:45.148855Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:32:45.161616 waagent[1863]: 2025-02-13T19:32:45.161573Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 19:32:45.259746 waagent[1863]: 2025-02-13T19:32:45.259009Z INFO Daemon Downloaded certificate {'thumbprint': '8025A3E6508BC2048577CF2E806578DD05F3E41A', 'hasPrivateKey': False} Feb 13 19:32:45.270962 waagent[1863]: 2025-02-13T19:32:45.270904Z INFO Daemon Downloaded certificate {'thumbprint': '60E079A31B1CB046EF7D9C4020A87EA99740F12F', 'hasPrivateKey': True} Feb 13 19:32:45.283375 waagent[1863]: 2025-02-13T19:32:45.283320Z INFO Daemon Fetch goal state completed Feb 13 19:32:45.300134 waagent[1863]: 2025-02-13T19:32:45.300049Z INFO Daemon Daemon Starting provisioning Feb 13 19:32:45.306312 waagent[1863]: 2025-02-13T19:32:45.306248Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 19:32:45.312589 waagent[1863]: 2025-02-13T19:32:45.312529Z INFO Daemon Daemon Set hostname [ci-4230.0.1-a-2ba2208742] Feb 13 19:32:45.335750 waagent[1863]: 2025-02-13T19:32:45.335653Z INFO Daemon Daemon Publish hostname [ci-4230.0.1-a-2ba2208742] Feb 13 19:32:45.343821 waagent[1863]: 2025-02-13T19:32:45.343759Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 19:32:45.351234 waagent[1863]: 2025-02-13T19:32:45.351179Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 19:32:45.364139 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:45.364737 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 19:32:45.364795 systemd-networkd[1436]: eth0: DHCP lease lost Feb 13 19:32:45.365125 waagent[1863]: 2025-02-13T19:32:45.365051Z INFO Daemon Daemon Create user account if not exists Feb 13 19:32:45.372858 waagent[1863]: 2025-02-13T19:32:45.372796Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 19:32:45.379683 waagent[1863]: 2025-02-13T19:32:45.379623Z INFO Daemon Daemon Configure sudoer Feb 13 19:32:45.385066 waagent[1863]: 2025-02-13T19:32:45.385001Z INFO Daemon Daemon Configure sshd Feb 13 19:32:45.390515 waagent[1863]: 2025-02-13T19:32:45.390459Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 19:32:45.405622 waagent[1863]: 2025-02-13T19:32:45.405559Z INFO Daemon Daemon Deploy ssh public key. Feb 13 19:32:45.416752 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 19:32:46.534280 waagent[1863]: 2025-02-13T19:32:46.534230Z INFO Daemon Daemon Provisioning complete Feb 13 19:32:46.555493 waagent[1863]: 2025-02-13T19:32:46.555445Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 19:32:46.562812 waagent[1863]: 2025-02-13T19:32:46.562754Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 19:32:46.573589 waagent[1863]: 2025-02-13T19:32:46.573535Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 19:32:46.704611 waagent[1942]: 2025-02-13T19:32:46.704103Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 19:32:46.704611 waagent[1942]: 2025-02-13T19:32:46.704251Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.1 Feb 13 19:32:46.704611 waagent[1942]: 2025-02-13T19:32:46.704301Z INFO ExtHandler ExtHandler Python: 3.11.11 Feb 13 19:32:46.725771 waagent[1942]: 2025-02-13T19:32:46.725670Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 19:32:46.726080 waagent[1942]: 2025-02-13T19:32:46.726043Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:32:46.726224 waagent[1942]: 2025-02-13T19:32:46.726191Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:32:46.737469 waagent[1942]: 2025-02-13T19:32:46.737392Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:32:46.744084 waagent[1942]: 2025-02-13T19:32:46.744038Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 19:32:46.745720 waagent[1942]: 2025-02-13T19:32:46.744675Z INFO ExtHandler Feb 13 19:32:46.745720 waagent[1942]: 2025-02-13T19:32:46.744774Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ff40716f-16a6-4f66-9c43-546661100a72 eTag: 15570550641084362983 source: Fabric] Feb 13 19:32:46.745720 waagent[1942]: 2025-02-13T19:32:46.745045Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 19:32:46.745720 waagent[1942]: 2025-02-13T19:32:46.745605Z INFO ExtHandler Feb 13 19:32:46.745720 waagent[1942]: 2025-02-13T19:32:46.745668Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:32:46.750063 waagent[1942]: 2025-02-13T19:32:46.750031Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 19:32:46.828770 waagent[1942]: 2025-02-13T19:32:46.828629Z INFO ExtHandler Downloaded certificate {'thumbprint': '8025A3E6508BC2048577CF2E806578DD05F3E41A', 'hasPrivateKey': False} Feb 13 19:32:46.829323 waagent[1942]: 2025-02-13T19:32:46.829284Z INFO ExtHandler Downloaded certificate {'thumbprint': '60E079A31B1CB046EF7D9C4020A87EA99740F12F', 'hasPrivateKey': True} Feb 13 19:32:46.829898 waagent[1942]: 2025-02-13T19:32:46.829844Z INFO ExtHandler Fetch goal state completed Feb 13 19:32:46.847342 waagent[1942]: 2025-02-13T19:32:46.847281Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1942 Feb 13 19:32:46.847628 waagent[1942]: 2025-02-13T19:32:46.847594Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 19:32:46.849475 waagent[1942]: 2025-02-13T19:32:46.849422Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 19:32:46.849976 waagent[1942]: 2025-02-13T19:32:46.849937Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 19:32:50.052378 waagent[1942]: 2025-02-13T19:32:50.052326Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 19:32:50.052733 waagent[1942]: 2025-02-13T19:32:50.052547Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 19:32:50.058786 waagent[1942]: 2025-02-13T19:32:50.058740Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 19:32:50.065065 systemd[1]: Reload requested from client PID 1960 ('systemctl') (unit waagent.service)... Feb 13 19:32:50.065306 systemd[1]: Reloading... Feb 13 19:32:50.160720 zram_generator::config[1999]: No configuration found. Feb 13 19:32:50.262326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:50.358295 systemd[1]: Reloading finished in 292 ms. Feb 13 19:32:50.375637 waagent[1942]: 2025-02-13T19:32:50.373838Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 19:32:50.381716 systemd[1]: Reload requested from client PID 2054 ('systemctl') (unit waagent.service)... Feb 13 19:32:50.381733 systemd[1]: Reloading... Feb 13 19:32:50.468751 zram_generator::config[2096]: No configuration found. Feb 13 19:32:50.565422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:50.661911 systemd[1]: Reloading finished in 279 ms. 
Feb 13 19:32:50.673644 waagent[1942]: 2025-02-13T19:32:50.672851Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 19:32:50.673644 waagent[1942]: 2025-02-13T19:32:50.673022Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 19:32:53.625290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:32:53.634868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:01.397437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:01.401486 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:01.698335 kubelet[2159]: E0213 19:33:01.698219 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:01.701182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:01.701324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:01.701737 systemd[1]: kubelet.service: Consumed 347ms CPU time, 94.6M memory peak. Feb 13 19:33:02.498585 waagent[1942]: 2025-02-13T19:33:02.498499Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 19:33:02.499240 waagent[1942]: 2025-02-13T19:33:02.499165Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 19:33:02.500068 waagent[1942]: 2025-02-13T19:33:02.499980Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 19:33:02.500474 waagent[1942]: 2025-02-13T19:33:02.500379Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 19:33:02.501051 waagent[1942]: 2025-02-13T19:33:02.500935Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 19:33:02.501216 waagent[1942]: 2025-02-13T19:33:02.501054Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 19:33:02.501654 waagent[1942]: 2025-02-13T19:33:02.501550Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 19:33:02.501734 waagent[1942]: 2025-02-13T19:33:02.501652Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 13 19:33:02.502177 waagent[1942]: 2025-02-13T19:33:02.502033Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:33:02.502622 waagent[1942]: 2025-02-13T19:33:02.502461Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:33:02.502622 waagent[1942]: 2025-02-13T19:33:02.502569Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 19:33:02.503467 waagent[1942]: 2025-02-13T19:33:02.502839Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:33:02.503467 waagent[1942]: 2025-02-13T19:33:02.503074Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 13 19:33:02.503467 waagent[1942]: 2025-02-13T19:33:02.503247Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 13 19:33:02.503467 waagent[1942]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 13 19:33:02.503467 waagent[1942]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 13 19:33:02.503467 waagent[1942]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 13 19:33:02.503467 waagent[1942]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 13 19:33:02.503467 waagent[1942]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 13 19:33:02.503467 waagent[1942]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 13 19:33:02.504033 waagent[1942]: 2025-02-13T19:33:02.503972Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 19:33:02.504214 waagent[1942]: 2025-02-13T19:33:02.504165Z INFO EnvHandler ExtHandler Configure routes
Feb 13 19:33:02.504277 waagent[1942]: 2025-02-13T19:33:02.504244Z INFO EnvHandler ExtHandler Gateway:None
Feb 13 19:33:02.504323 waagent[1942]: 2025-02-13T19:33:02.504297Z INFO EnvHandler ExtHandler Routes:None
Feb 13 19:33:02.512903 waagent[1942]: 2025-02-13T19:33:02.512846Z INFO ExtHandler ExtHandler
Feb 13 19:33:02.513016 waagent[1942]: 2025-02-13T19:33:02.512962Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ad5e8e78-8edd-4d07-aa21-0d05205d123c correlation ff671495-d8c6-4ac4-af2c-a7efa09564cc created: 2025-02-13T19:31:22.228813Z]
Feb 13 19:33:02.513385 waagent[1942]: 2025-02-13T19:33:02.513334Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 13 19:33:02.513978 waagent[1942]: 2025-02-13T19:33:02.513935Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Feb 13 19:33:02.553932 waagent[1942]: 2025-02-13T19:33:02.553797Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F9A96DE2-0CCA-444D-B9D4-F71D551907D6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Feb 13 19:33:02.593138 waagent[1942]: 2025-02-13T19:33:02.593063Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 13 19:33:02.593138 waagent[1942]: Executing ['ip', '-a', '-o', 'link']:
Feb 13 19:33:02.593138 waagent[1942]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 13 19:33:02.593138 waagent[1942]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:99:6e brd ff:ff:ff:ff:ff:ff
Feb 13 19:33:02.593138 waagent[1942]: 3: enP60846s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:99:6e brd ff:ff:ff:ff:ff:ff\ altname enP60846p0s2
Feb 13 19:33:02.593138 waagent[1942]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 13 19:33:02.593138 waagent[1942]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 13 19:33:02.593138 waagent[1942]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 13 19:33:02.593138 waagent[1942]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 13 19:33:02.593138 waagent[1942]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Feb 13 19:33:02.593138 waagent[1942]: 2: eth0 inet6 fe80::222:48ff:feb9:996e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Feb 13 19:33:02.593138 waagent[1942]: 3: enP60846s1 inet6 fe80::222:48ff:feb9:996e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Feb 13 19:33:02.828667 waagent[1942]: 2025-02-13T19:33:02.827713Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 13 19:33:02.828667 waagent[1942]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.828667 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.828667 waagent[1942]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.828667 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.828667 waagent[1942]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.828667 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.828667 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 13 19:33:02.828667 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 13 19:33:02.828667 waagent[1942]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 13 19:33:02.830827 waagent[1942]: 2025-02-13T19:33:02.830749Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 13 19:33:02.830827 waagent[1942]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.830827 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.830827 waagent[1942]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.830827 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.830827 waagent[1942]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 19:33:02.830827 waagent[1942]: pkts bytes target prot opt in out source destination
Feb 13 19:33:02.830827 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 13 19:33:02.830827 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 13 19:33:02.830827 waagent[1942]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 13 19:33:02.831387 waagent[1942]: 2025-02-13T19:33:02.831357Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 13 19:33:05.459251 chronyd[1691]: Selected source PHC0
Feb 13 19:33:11.875303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:33:11.880880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:33:11.989090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:33:11.993193 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:33:12.058433 kubelet[2203]: E0213 19:33:12.058356 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:33:12.061140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:33:12.061411 systemd[1]: kubelet.service: Failed with result 'exit-code'.
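The /proc/net/route table that MonitorHandler dumped above stores each IPv4 address as a little-endian 32-bit hex word (standard kernel formatting, nothing waagent-specific). A small decoding sketch:

    import socket
    import struct

    def route_hex_to_ip(word: str) -> str:
        # /proc/net/route fields are little-endian 32-bit hex
        return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

    print(route_hex_to_ip("0114C80A"))  # 10.200.20.1     (default gateway)
    print(route_hex_to_ip("10813FA8"))  # 168.63.129.16   (Azure WireServer)
    print(route_hex_to_ip("FEA9FEA9"))  # 169.254.169.254 (IMDS)

Decoded this way, the host routes line up with the firewall rules above, which are all scoped to the WireServer address 168.63.129.16.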
Feb 13 19:33:12.061816 systemd[1]: kubelet.service: Consumed 119ms CPU time, 94.9M memory peak. Feb 13 19:33:22.125331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:33:22.133897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:22.227279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:22.238995 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:22.279477 kubelet[2219]: E0213 19:33:22.279382 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:22.285978 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 19:33:22.285904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:22.286038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:22.286308 systemd[1]: kubelet.service: Consumed 119ms CPU time, 97M memory peak. Feb 13 19:33:27.548341 update_engine[1713]: I20250213 19:33:27.547807 1713 update_attempter.cc:509] Updating boot flags... Feb 13 19:33:27.600739 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2241) Feb 13 19:33:27.714135 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2246) Feb 13 19:33:30.818968 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:33:30.823954 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:57146.service - OpenSSH per-connection server daemon (10.200.16.10:57146). Feb 13 19:33:31.506320 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 57146 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:31.507661 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:31.511726 systemd-logind[1711]: New session 3 of user core. Feb 13 19:33:31.518885 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:33:31.913082 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:57150.service - OpenSSH per-connection server daemon (10.200.16.10:57150). Feb 13 19:33:32.319346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 19:33:32.329908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:32.416287 sshd[2346]: Accepted publickey for core from 10.200.16.10 port 57150 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:32.417619 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:32.422898 systemd-logind[1711]: New session 4 of user core. Feb 13 19:33:32.433888 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:33:32.628399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:32.642047 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:32.677020 kubelet[2357]: E0213 19:33:32.676955 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:32.679530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:32.679826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:32.680328 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.1M memory peak. Feb 13 19:33:32.760758 sshd[2351]: Connection closed by 10.200.16.10 port 57150 Feb 13 19:33:32.761345 sshd-session[2346]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:32.764919 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:57150.service: Deactivated successfully. Feb 13 19:33:32.766543 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:33:32.767242 systemd-logind[1711]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:33:32.768361 systemd-logind[1711]: Removed session 4. Feb 13 19:33:32.846111 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:57154.service - OpenSSH per-connection server daemon (10.200.16.10:57154). Feb 13 19:33:33.299854 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 57154 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:33.301207 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:33.305309 systemd-logind[1711]: New session 5 of user core. Feb 13 19:33:33.315940 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:33:33.621848 sshd[2371]: Connection closed by 10.200.16.10 port 57154 Feb 13 19:33:33.622371 sshd-session[2369]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:33.626159 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:57154.service: Deactivated successfully. Feb 13 19:33:33.627835 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:33:33.628481 systemd-logind[1711]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:33:33.629359 systemd-logind[1711]: Removed session 5. Feb 13 19:33:33.714958 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:57166.service - OpenSSH per-connection server daemon (10.200.16.10:57166). Feb 13 19:33:34.197267 sshd[2377]: Accepted publickey for core from 10.200.16.10 port 57166 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:34.198643 sshd-session[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:34.203054 systemd-logind[1711]: New session 6 of user core. Feb 13 19:33:34.211855 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:33:34.544727 sshd[2379]: Connection closed by 10.200.16.10 port 57166 Feb 13 19:33:34.545228 sshd-session[2377]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:34.548969 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:57166.service: Deactivated successfully. Feb 13 19:33:34.550532 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:33:34.551177 systemd-logind[1711]: Session 6 logged out. Waiting for processes to exit. 
Feb 13 19:33:34.552236 systemd-logind[1711]: Removed session 6. Feb 13 19:33:34.638945 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:57176.service - OpenSSH per-connection server daemon (10.200.16.10:57176). Feb 13 19:33:35.082341 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 57176 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:35.083595 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:35.087372 systemd-logind[1711]: New session 7 of user core. Feb 13 19:33:35.096854 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:33:35.449088 sudo[2388]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:33:35.449360 sudo[2388]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:35.489488 sudo[2388]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:35.563062 sshd[2387]: Connection closed by 10.200.16.10 port 57176 Feb 13 19:33:35.563836 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:35.566898 systemd-logind[1711]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:33:35.567169 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:57176.service: Deactivated successfully. Feb 13 19:33:35.568839 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:33:35.570461 systemd-logind[1711]: Removed session 7. Feb 13 19:33:35.647077 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:57186.service - OpenSSH per-connection server daemon (10.200.16.10:57186). Feb 13 19:33:36.074442 sshd[2394]: Accepted publickey for core from 10.200.16.10 port 57186 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:36.075780 sshd-session[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:36.079817 systemd-logind[1711]: New session 8 of user core. Feb 13 19:33:36.088883 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:33:36.318837 sudo[2398]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:33:36.319439 sudo[2398]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:36.322562 sudo[2398]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:36.327410 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:33:36.327925 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:36.341563 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:33:36.364314 augenrules[2420]: No rules Feb 13 19:33:36.364921 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:33:36.365110 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:33:36.368010 sudo[2397]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:36.436046 sshd[2396]: Connection closed by 10.200.16.10 port 57186 Feb 13 19:33:36.436405 sshd-session[2394]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:36.441100 systemd-logind[1711]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:33:36.441319 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:57186.service: Deactivated successfully. Feb 13 19:33:36.444077 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 13 19:33:36.444987 systemd-logind[1711]: Removed session 8. Feb 13 19:33:36.533013 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:57194.service - OpenSSH per-connection server daemon (10.200.16.10:57194). Feb 13 19:33:37.020189 sshd[2429]: Accepted publickey for core from 10.200.16.10 port 57194 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:33:37.021468 sshd-session[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:37.026403 systemd-logind[1711]: New session 9 of user core. Feb 13 19:33:37.031874 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:33:37.294786 sudo[2432]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:33:37.295065 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:38.463959 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:33:38.464092 (dockerd)[2449]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:33:39.283600 dockerd[2449]: time="2025-02-13T19:33:39.283542998Z" level=info msg="Starting up" Feb 13 19:33:39.722231 dockerd[2449]: time="2025-02-13T19:33:39.722020570Z" level=info msg="Loading containers: start." Feb 13 19:33:39.915728 kernel: Initializing XFRM netlink socket Feb 13 19:33:40.066619 systemd-networkd[1436]: docker0: Link UP Feb 13 19:33:40.110984 dockerd[2449]: time="2025-02-13T19:33:40.110936370Z" level=info msg="Loading containers: done." Feb 13 19:33:40.132508 dockerd[2449]: time="2025-02-13T19:33:40.132459850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:33:40.132658 dockerd[2449]: time="2025-02-13T19:33:40.132572690Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:33:40.132767 dockerd[2449]: time="2025-02-13T19:33:40.132742051Z" level=info msg="Daemon has completed initialization" Feb 13 19:33:40.185385 dockerd[2449]: time="2025-02-13T19:33:40.185265468Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:33:40.185891 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:33:41.180223 containerd[1773]: time="2025-02-13T19:33:41.180152510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:33:42.024918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963772331.mount: Deactivated successfully. Feb 13 19:33:42.875269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 19:33:42.884048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:43.002933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:43.006209 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:43.056008 kubelet[2696]: E0213 19:33:43.055533 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:43.058777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:43.058919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:43.060972 systemd[1]: kubelet.service: Consumed 133ms CPU time, 94.2M memory peak. Feb 13 19:33:43.401132 containerd[1773]: time="2025-02-13T19:33:43.401082944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:43.407263 containerd[1773]: time="2025-02-13T19:33:43.407196515Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 19:33:43.411538 containerd[1773]: time="2025-02-13T19:33:43.411478763Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:43.416233 containerd[1773]: time="2025-02-13T19:33:43.416166412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:43.417303 containerd[1773]: time="2025-02-13T19:33:43.417258894Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.237065344s" Feb 13 19:33:43.417303 containerd[1773]: time="2025-02-13T19:33:43.417303454Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:33:43.418899 containerd[1773]: time="2025-02-13T19:33:43.418672776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:33:44.741021 containerd[1773]: time="2025-02-13T19:33:44.740968625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.745049 containerd[1773]: time="2025-02-13T19:33:44.744995593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 19:33:44.749861 containerd[1773]: time="2025-02-13T19:33:44.749804642Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.758651 containerd[1773]: time="2025-02-13T19:33:44.758582378Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.759762 containerd[1773]: time="2025-02-13T19:33:44.759715940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.340966683s" Feb 13 19:33:44.759980 containerd[1773]: time="2025-02-13T19:33:44.759883421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:33:44.760526 containerd[1773]: time="2025-02-13T19:33:44.760497262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:33:45.823955 containerd[1773]: time="2025-02-13T19:33:45.823897029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:45.827420 containerd[1773]: time="2025-02-13T19:33:45.827351834Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 19:33:45.832557 containerd[1773]: time="2025-02-13T19:33:45.831530679Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:45.838790 containerd[1773]: time="2025-02-13T19:33:45.838736809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:45.839875 containerd[1773]: time="2025-02-13T19:33:45.839841210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.079306028s" Feb 13 19:33:45.839994 containerd[1773]: time="2025-02-13T19:33:45.839979371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:33:45.841007 containerd[1773]: time="2025-02-13T19:33:45.840982492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:33:47.444066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714678135.mount: Deactivated successfully. 
Feb 13 19:33:47.809809 containerd[1773]: time="2025-02-13T19:33:47.809400141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:47.814115 containerd[1773]: time="2025-02-13T19:33:47.813921467Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 19:33:47.817265 containerd[1773]: time="2025-02-13T19:33:47.817212832Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:47.821483 containerd[1773]: time="2025-02-13T19:33:47.821424877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:47.822208 containerd[1773]: time="2025-02-13T19:33:47.821997718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.980880346s" Feb 13 19:33:47.822208 containerd[1773]: time="2025-02-13T19:33:47.822035038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:33:47.822777 containerd[1773]: time="2025-02-13T19:33:47.822747879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:33:48.499112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209195443.mount: Deactivated successfully. 
Feb 13 19:33:49.618736 containerd[1773]: time="2025-02-13T19:33:49.618265979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.625250 containerd[1773]: time="2025-02-13T19:33:49.625198309Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:33:49.630126 containerd[1773]: time="2025-02-13T19:33:49.630087395Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.638820 containerd[1773]: time="2025-02-13T19:33:49.638773327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.640987 containerd[1773]: time="2025-02-13T19:33:49.640944089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.81815889s" Feb 13 19:33:49.641267 containerd[1773]: time="2025-02-13T19:33:49.641131850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:33:49.641912 containerd[1773]: time="2025-02-13T19:33:49.641714810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:33:50.263135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756689930.mount: Deactivated successfully. 
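The pause:3.10 image pulled above is the sandbox image: a roughly 268 kB binary that holds each pod's namespaces open while the real containers come and go. These pulls land in containerd's k8s.io namespace, which can be listed directly (a sketch, assuming ctr is shipped alongside containerd and talks to the default socket):

    ctr --namespace k8s.io images ls | grep pause
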
Feb 13 19:33:50.291740 containerd[1773]: time="2025-02-13T19:33:50.291196111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.294089 containerd[1773]: time="2025-02-13T19:33:50.294038235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:33:50.299530 containerd[1773]: time="2025-02-13T19:33:50.299492322Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.305201 containerd[1773]: time="2025-02-13T19:33:50.305154250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:50.305974 containerd[1773]: time="2025-02-13T19:33:50.305865531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 664.118921ms" Feb 13 19:33:50.306663 containerd[1773]: time="2025-02-13T19:33:50.306641812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:33:50.307300 containerd[1773]: time="2025-02-13T19:33:50.307172693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:33:51.029983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591551801.mount: Deactivated successfully. Feb 13 19:33:53.125767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 19:33:53.133973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:53.236855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:53.240116 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:53.289505 kubelet[2826]: E0213 19:33:53.289463 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:53.293510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:53.293649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:53.293937 systemd[1]: kubelet.service: Consumed 122ms CPU time, 92.4M memory peak. 
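Same failure as before, restart counter now at 6: /var/lib/kubelet/config.yaml still does not exist, and systemd keeps rescheduling the unit per its Restart= setting. To inspect such a loop by hand on the node:

    systemctl status kubelet.service --no-pager
    journalctl -u kubelet.service -b --no-pager | tail -n 20
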
Feb 13 19:33:53.756067 containerd[1773]: time="2025-02-13T19:33:53.755999225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.759766 containerd[1773]: time="2025-02-13T19:33:53.759711912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 19:33:53.763239 containerd[1773]: time="2025-02-13T19:33:53.763183319Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.772380 containerd[1773]: time="2025-02-13T19:33:53.771853334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:53.773360 containerd[1773]: time="2025-02-13T19:33:53.773324137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.465847684s" Feb 13 19:33:53.773462 containerd[1773]: time="2025-02-13T19:33:53.773445777Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:33:58.159453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:58.159984 systemd[1]: kubelet.service: Consumed 122ms CPU time, 92.4M memory peak. Feb 13 19:33:58.172185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:58.194178 systemd[1]: Reload requested from client PID 2862 ('systemctl') (unit session-9.scope)... Feb 13 19:33:58.194347 systemd[1]: Reloading... Feb 13 19:33:58.318782 zram_generator::config[2912]: No configuration found. Feb 13 19:33:58.421382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:33:58.523426 systemd[1]: Reloading finished in 328 ms. Feb 13 19:33:58.559029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:58.563581 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:33:58.567339 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:58.568248 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:33:58.569870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:58.569920 systemd[1]: kubelet.service: Consumed 84ms CPU time, 83.6M memory peak. Feb 13 19:33:58.576015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:58.671655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
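The "Reload requested from client PID 2862 ('systemctl') (unit session-9.scope)" line ties back to the sudo'd install.sh in session 9 above: the script has evidently written the kubelet configuration and unit drop-ins and is cycling the service, the equivalent of:

    systemctl daemon-reload
    systemctl restart kubelet.service

After the final restart in this span the kubelet comes up for real; the deprecation warnings and config dump that follow are produced by the freshly written flags and config file.
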
Feb 13 19:33:58.681982 (kubelet)[2982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:33:58.715720 kubelet[2982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:58.715720 kubelet[2982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:33:58.715720 kubelet[2982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:58.715720 kubelet[2982]: I0213 19:33:58.714956 2982 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:33:59.431187 kubelet[2982]: I0213 19:33:59.431151 2982 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:33:59.431332 kubelet[2982]: I0213 19:33:59.431323 2982 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:33:59.431625 kubelet[2982]: I0213 19:33:59.431613 2982 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:33:59.451892 kubelet[2982]: E0213 19:33:59.451841 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:59.452474 kubelet[2982]: I0213 19:33:59.452447 2982 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:33:59.458149 kubelet[2982]: E0213 19:33:59.458110 2982 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:33:59.458149 kubelet[2982]: I0213 19:33:59.458144 2982 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:33:59.463380 kubelet[2982]: I0213 19:33:59.463121 2982 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:33:59.463916 kubelet[2982]: I0213 19:33:59.463896 2982 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:33:59.464069 kubelet[2982]: I0213 19:33:59.464040 2982 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:33:59.464279 kubelet[2982]: I0213 19:33:59.464070 2982 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-2ba2208742","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:33:59.464360 kubelet[2982]: I0213 19:33:59.464289 2982 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:33:59.464360 kubelet[2982]: I0213 19:33:59.464298 2982 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:33:59.464443 kubelet[2982]: I0213 19:33:59.464423 2982 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:59.466051 kubelet[2982]: I0213 19:33:59.466032 2982 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:33:59.466086 kubelet[2982]: I0213 19:33:59.466060 2982 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:33:59.466107 kubelet[2982]: I0213 19:33:59.466086 2982 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:33:59.466107 kubelet[2982]: I0213 19:33:59.466096 2982 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:33:59.470796 kubelet[2982]: W0213 19:33:59.470486 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-2ba2208742&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:33:59.470796 kubelet[2982]: E0213 19:33:59.470543 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-2ba2208742&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:59.472823 kubelet[2982]: W0213 19:33:59.472529 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:33:59.472823 kubelet[2982]: E0213 19:33:59.472585 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:59.472823 kubelet[2982]: I0213 19:33:59.472690 2982 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:33:59.474362 kubelet[2982]: I0213 19:33:59.474340 2982 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:33:59.475043 kubelet[2982]: W0213 19:33:59.475027 2982 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:33:59.475783 kubelet[2982]: I0213 19:33:59.475657 2982 server.go:1269] "Started kubelet" Feb 13 19:33:59.477088 kubelet[2982]: I0213 19:33:59.477072 2982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:33:59.483098 kubelet[2982]: E0213 19:33:59.481688 2982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-2ba2208742.1823db83a0886107 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-2ba2208742,UID:ci-4230.0.1-a-2ba2208742,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-2ba2208742,},FirstTimestamp:2025-02-13 19:33:59.475634439 +0000 UTC m=+0.790742612,LastTimestamp:2025-02-13 19:33:59.475634439 +0000 UTC m=+0.790742612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-2ba2208742,}" Feb 13 19:33:59.483098 kubelet[2982]: I0213 19:33:59.477759 2982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:33:59.483098 kubelet[2982]: I0213 19:33:59.483029 2982 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:33:59.484658 kubelet[2982]: I0213 19:33:59.484627 2982 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:33:59.485924 kubelet[2982]: I0213 19:33:59.477710 2982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:33:59.487080 kubelet[2982]: E0213 19:33:59.486921 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-2ba2208742\" not 
found" Feb 13 19:33:59.487080 kubelet[2982]: I0213 19:33:59.486969 2982 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:33:59.487279 kubelet[2982]: I0213 19:33:59.487262 2982 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:33:59.487385 kubelet[2982]: I0213 19:33:59.487375 2982 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:33:59.488235 kubelet[2982]: I0213 19:33:59.487592 2982 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:33:59.488235 kubelet[2982]: W0213 19:33:59.487827 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:33:59.488235 kubelet[2982]: E0213 19:33:59.487872 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:59.488235 kubelet[2982]: E0213 19:33:59.488065 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-2ba2208742?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Feb 13 19:33:59.489218 kubelet[2982]: I0213 19:33:59.489188 2982 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:33:59.489359 kubelet[2982]: I0213 19:33:59.489331 2982 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:33:59.489618 kubelet[2982]: E0213 19:33:59.489559 2982 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:33:59.491176 kubelet[2982]: I0213 19:33:59.491140 2982 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:33:59.510099 kubelet[2982]: I0213 19:33:59.510075 2982 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:33:59.510261 kubelet[2982]: I0213 19:33:59.510250 2982 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:33:59.510340 kubelet[2982]: I0213 19:33:59.510332 2982 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:59.515078 kubelet[2982]: I0213 19:33:59.515058 2982 policy_none.go:49] "None policy: Start" Feb 13 19:33:59.515838 kubelet[2982]: I0213 19:33:59.515815 2982 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:33:59.515932 kubelet[2982]: I0213 19:33:59.515863 2982 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:33:59.527066 kubelet[2982]: I0213 19:33:59.526616 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:33:59.529542 kubelet[2982]: I0213 19:33:59.529515 2982 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:33:59.530740 kubelet[2982]: I0213 19:33:59.530608 2982 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:33:59.530740 kubelet[2982]: I0213 19:33:59.530637 2982 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:33:59.530740 kubelet[2982]: E0213 19:33:59.530678 2982 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:33:59.531728 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:33:59.534875 kubelet[2982]: W0213 19:33:59.534758 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:33:59.535037 kubelet[2982]: E0213 19:33:59.535016 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:59.545803 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:33:59.549068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:33:59.560615 kubelet[2982]: I0213 19:33:59.560586 2982 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:33:59.560833 kubelet[2982]: I0213 19:33:59.560817 2982 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:33:59.560881 kubelet[2982]: I0213 19:33:59.560832 2982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:33:59.561704 kubelet[2982]: I0213 19:33:59.561672 2982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:33:59.563859 kubelet[2982]: E0213 19:33:59.563644 2982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-2ba2208742\" not found" Feb 13 19:33:59.641054 systemd[1]: Created slice kubepods-burstable-pod7c18b33dc2ea853922a19cb031b2a973.slice - libcontainer container kubepods-burstable-pod7c18b33dc2ea853922a19cb031b2a973.slice. Feb 13 19:33:59.660302 systemd[1]: Created slice kubepods-burstable-pod7a83cad0123db877571da306a40bd98b.slice - libcontainer container kubepods-burstable-pod7a83cad0123db877571da306a40bd98b.slice. Feb 13 19:33:59.662931 kubelet[2982]: I0213 19:33:59.662893 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.663263 kubelet[2982]: E0213 19:33:59.663230 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.677024 systemd[1]: Created slice kubepods-burstable-pode0287df5729a5d2922eb12463f0dd5e6.slice - libcontainer container kubepods-burstable-pode0287df5729a5d2922eb12463f0dd5e6.slice. 
Feb 13 19:33:59.688603 kubelet[2982]: E0213 19:33:59.688487 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-2ba2208742?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Feb 13 19:33:59.788421 kubelet[2982]: I0213 19:33:59.788386 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0287df5729a5d2922eb12463f0dd5e6-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-2ba2208742\" (UID: \"e0287df5729a5d2922eb12463f0dd5e6\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788421 kubelet[2982]: I0213 19:33:59.788430 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788838 kubelet[2982]: I0213 19:33:59.788449 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788838 kubelet[2982]: I0213 19:33:59.788464 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788838 kubelet[2982]: I0213 19:33:59.788480 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788838 kubelet[2982]: I0213 19:33:59.788495 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788838 kubelet[2982]: I0213 19:33:59.788513 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788945 kubelet[2982]: I0213 19:33:59.788529 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.788945 kubelet[2982]: I0213 19:33:59.788549 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.865157 kubelet[2982]: I0213 19:33:59.865125 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.865577 kubelet[2982]: E0213 19:33:59.865545 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:33:59.959476 containerd[1773]: time="2025-02-13T19:33:59.959066966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-2ba2208742,Uid:7c18b33dc2ea853922a19cb031b2a973,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:59.975300 containerd[1773]: time="2025-02-13T19:33:59.975242275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-2ba2208742,Uid:7a83cad0123db877571da306a40bd98b,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:59.980345 containerd[1773]: time="2025-02-13T19:33:59.980115284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-2ba2208742,Uid:e0287df5729a5d2922eb12463f0dd5e6,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:00.089792 kubelet[2982]: E0213 19:34:00.089629 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-2ba2208742?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Feb 13 19:34:00.267955 kubelet[2982]: I0213 19:34:00.267830 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:00.268506 kubelet[2982]: E0213 19:34:00.268426 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:00.652453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423800632.mount: Deactivated successfully. 
Feb 13 19:34:00.683478 kubelet[2982]: W0213 19:34:00.683416 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:34:00.683675 kubelet[2982]: E0213 19:34:00.683654 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:00.690377 containerd[1773]: time="2025-02-13T19:34:00.690318827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:00.704081 containerd[1773]: time="2025-02-13T19:34:00.704018092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:34:00.709584 containerd[1773]: time="2025-02-13T19:34:00.708828301Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:00.713226 containerd[1773]: time="2025-02-13T19:34:00.713187909Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:00.723905 containerd[1773]: time="2025-02-13T19:34:00.723167648Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:00.726956 containerd[1773]: time="2025-02-13T19:34:00.726906134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:00.733205 containerd[1773]: time="2025-02-13T19:34:00.732856185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:00.739931 containerd[1773]: time="2025-02-13T19:34:00.739896838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:00.740733 containerd[1773]: time="2025-02-13T19:34:00.740677600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 781.508154ms" Feb 13 19:34:00.749257 containerd[1773]: time="2025-02-13T19:34:00.749194615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 768.996451ms" Feb 13 
19:34:00.754540 containerd[1773]: time="2025-02-13T19:34:00.754347985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 778.992869ms" Feb 13 19:34:00.890268 kubelet[2982]: E0213 19:34:00.890200 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-2ba2208742?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Feb 13 19:34:00.905349 kubelet[2982]: W0213 19:34:00.904923 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:34:00.905349 kubelet[2982]: E0213 19:34:00.904968 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:00.975201 kubelet[2982]: W0213 19:34:00.975137 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-2ba2208742&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:34:00.975340 kubelet[2982]: E0213 19:34:00.975210 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-2ba2208742&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:01.057488 kubelet[2982]: W0213 19:34:01.057427 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Feb 13 19:34:01.057602 kubelet[2982]: E0213 19:34:01.057499 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:01.070571 kubelet[2982]: I0213 19:34:01.070530 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:01.070866 kubelet[2982]: E0213 19:34:01.070836 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:01.546004 containerd[1773]: time="2025-02-13T19:34:01.545882597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:01.546004 containerd[1773]: time="2025-02-13T19:34:01.545953157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:01.546004 containerd[1773]: time="2025-02-13T19:34:01.545968877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.547092 containerd[1773]: time="2025-02-13T19:34:01.547014839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.547333 containerd[1773]: time="2025-02-13T19:34:01.547231280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:01.547333 containerd[1773]: time="2025-02-13T19:34:01.547288360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:01.547333 containerd[1773]: time="2025-02-13T19:34:01.547303560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.547435 containerd[1773]: time="2025-02-13T19:34:01.547362120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.550456 containerd[1773]: time="2025-02-13T19:34:01.550357765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:01.551329 kubelet[2982]: E0213 19:34:01.551298 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:34:01.553653 containerd[1773]: time="2025-02-13T19:34:01.553174011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:01.553653 containerd[1773]: time="2025-02-13T19:34:01.553206971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.553653 containerd[1773]: time="2025-02-13T19:34:01.553310251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:01.572902 systemd[1]: Started cri-containerd-2a48bb13fdf6fa29c240b246a6fb7d28cdff7321d8c5174f78dcaf9c1e7a074c.scope - libcontainer container 2a48bb13fdf6fa29c240b246a6fb7d28cdff7321d8c5174f78dcaf9c1e7a074c. Feb 13 19:34:01.581526 systemd[1]: Started cri-containerd-3f5d5bc10479e49ff6051bed3e8292aef43b03d6fc0f383b3bf80e1e0965eeb9.scope - libcontainer container 3f5d5bc10479e49ff6051bed3e8292aef43b03d6fc0f383b3bf80e1e0965eeb9. Feb 13 19:34:01.582444 systemd[1]: Started cri-containerd-ea3e80657055008d4449dc1246f5d8559881ee30dea1717ab8c8a2f3f5b72913.scope - libcontainer container ea3e80657055008d4449dc1246f5d8559881ee30dea1717ab8c8a2f3f5b72913. 
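Each sandbox is started through the runc v2 shim (the io.containerd.runc.v2 plugin loads above) and tracked as its own transient cri-containerd-<id>.scope unit, as the Started cri-containerd-... lines show. Once the scopes are up, the sandboxes and their containers are visible over the CRI (a sketch, assuming crictl is installed and pointed at containerd's default socket):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
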
Feb 13 19:34:01.637359 containerd[1773]: time="2025-02-13T19:34:01.637317247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-2ba2208742,Uid:7a83cad0123db877571da306a40bd98b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a48bb13fdf6fa29c240b246a6fb7d28cdff7321d8c5174f78dcaf9c1e7a074c\"" Feb 13 19:34:01.640827 containerd[1773]: time="2025-02-13T19:34:01.640794534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-2ba2208742,Uid:e0287df5729a5d2922eb12463f0dd5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5d5bc10479e49ff6051bed3e8292aef43b03d6fc0f383b3bf80e1e0965eeb9\"" Feb 13 19:34:01.645599 containerd[1773]: time="2025-02-13T19:34:01.645550583Z" level=info msg="CreateContainer within sandbox \"2a48bb13fdf6fa29c240b246a6fb7d28cdff7321d8c5174f78dcaf9c1e7a074c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:34:01.650409 containerd[1773]: time="2025-02-13T19:34:01.648960229Z" level=info msg="CreateContainer within sandbox \"3f5d5bc10479e49ff6051bed3e8292aef43b03d6fc0f383b3bf80e1e0965eeb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:34:01.651149 containerd[1773]: time="2025-02-13T19:34:01.650840513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-2ba2208742,Uid:7c18b33dc2ea853922a19cb031b2a973,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea3e80657055008d4449dc1246f5d8559881ee30dea1717ab8c8a2f3f5b72913\"" Feb 13 19:34:01.656751 containerd[1773]: time="2025-02-13T19:34:01.656134283Z" level=info msg="CreateContainer within sandbox \"ea3e80657055008d4449dc1246f5d8559881ee30dea1717ab8c8a2f3f5b72913\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:34:01.764259 containerd[1773]: time="2025-02-13T19:34:01.764214964Z" level=info msg="CreateContainer within sandbox \"ea3e80657055008d4449dc1246f5d8559881ee30dea1717ab8c8a2f3f5b72913\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67f45382872484b580f21a8a4da31180df49385b4129be66407df1e8d0682707\"" Feb 13 19:34:01.765074 containerd[1773]: time="2025-02-13T19:34:01.765040766Z" level=info msg="StartContainer for \"67f45382872484b580f21a8a4da31180df49385b4129be66407df1e8d0682707\"" Feb 13 19:34:01.769012 containerd[1773]: time="2025-02-13T19:34:01.768908093Z" level=info msg="CreateContainer within sandbox \"2a48bb13fdf6fa29c240b246a6fb7d28cdff7321d8c5174f78dcaf9c1e7a074c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c59f04aa0df10724405492867b44b940d8539ff904d642b5fb117a95bacfa4d3\"" Feb 13 19:34:01.770554 containerd[1773]: time="2025-02-13T19:34:01.769462934Z" level=info msg="StartContainer for \"c59f04aa0df10724405492867b44b940d8539ff904d642b5fb117a95bacfa4d3\"" Feb 13 19:34:01.779005 containerd[1773]: time="2025-02-13T19:34:01.778962711Z" level=info msg="CreateContainer within sandbox \"3f5d5bc10479e49ff6051bed3e8292aef43b03d6fc0f383b3bf80e1e0965eeb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e303d73fa4a7541950da7c58bdfdfd94a0b1d965960d4a168870ee4475ae48c\"" Feb 13 19:34:01.779719 containerd[1773]: time="2025-02-13T19:34:01.779664753Z" level=info msg="StartContainer for \"3e303d73fa4a7541950da7c58bdfdfd94a0b1d965960d4a168870ee4475ae48c\"" Feb 13 19:34:01.798966 systemd[1]: Started cri-containerd-67f45382872484b580f21a8a4da31180df49385b4129be66407df1e8d0682707.scope - libcontainer container 
67f45382872484b580f21a8a4da31180df49385b4129be66407df1e8d0682707. Feb 13 19:34:01.802546 systemd[1]: Started cri-containerd-c59f04aa0df10724405492867b44b940d8539ff904d642b5fb117a95bacfa4d3.scope - libcontainer container c59f04aa0df10724405492867b44b940d8539ff904d642b5fb117a95bacfa4d3. Feb 13 19:34:01.824895 systemd[1]: Started cri-containerd-3e303d73fa4a7541950da7c58bdfdfd94a0b1d965960d4a168870ee4475ae48c.scope - libcontainer container 3e303d73fa4a7541950da7c58bdfdfd94a0b1d965960d4a168870ee4475ae48c. Feb 13 19:34:01.874885 containerd[1773]: time="2025-02-13T19:34:01.874474809Z" level=info msg="StartContainer for \"c59f04aa0df10724405492867b44b940d8539ff904d642b5fb117a95bacfa4d3\" returns successfully" Feb 13 19:34:01.874885 containerd[1773]: time="2025-02-13T19:34:01.874596810Z" level=info msg="StartContainer for \"67f45382872484b580f21a8a4da31180df49385b4129be66407df1e8d0682707\" returns successfully" Feb 13 19:34:01.883622 containerd[1773]: time="2025-02-13T19:34:01.883558186Z" level=info msg="StartContainer for \"3e303d73fa4a7541950da7c58bdfdfd94a0b1d965960d4a168870ee4475ae48c\" returns successfully" Feb 13 19:34:02.672716 kubelet[2982]: I0213 19:34:02.672677 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:04.260639 kubelet[2982]: E0213 19:34:04.260589 2982 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.1-a-2ba2208742\" not found" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:04.352721 kubelet[2982]: I0213 19:34:04.351565 2982 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:04.352721 kubelet[2982]: E0213 19:34:04.351645 2982 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.0.1-a-2ba2208742\": node \"ci-4230.0.1-a-2ba2208742\" not found" Feb 13 19:34:04.474072 kubelet[2982]: I0213 19:34:04.473821 2982 apiserver.go:52] "Watching apiserver" Feb 13 19:34:04.487730 kubelet[2982]: I0213 19:34:04.487670 2982 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:34:06.622460 systemd[1]: Reload requested from client PID 3260 ('systemctl') (unit session-9.scope)... Feb 13 19:34:06.622476 systemd[1]: Reloading... Feb 13 19:34:06.732792 zram_generator::config[3310]: No configuration found. Feb 13 19:34:06.840420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:06.953721 systemd[1]: Reloading finished in 330 ms. Feb 13 19:34:06.982820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:06.993129 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:34:06.993336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:06.993384 systemd[1]: kubelet.service: Consumed 1.154s CPU time, 114.4M memory peak. Feb 13 19:34:06.998346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:07.175945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
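The arc of this section resolves here: once the kube-apiserver container is serving on 10.200.20.38:6443, the "connection refused" reflector errors stop and the node registration that failed repeatedly above finally succeeds ("Successfully registered node"). On a kubeadm-style node this can be confirmed with the admin kubeconfig left on disk (the standard kubeadm path, an assumption for this host):

    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
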
Feb 13 19:34:07.188282 (kubelet)[3370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:07.233062 kubelet[3370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:07.233514 kubelet[3370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:07.233586 kubelet[3370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:07.233840 kubelet[3370]: I0213 19:34:07.233803 3370 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:07.244469 kubelet[3370]: I0213 19:34:07.244418 3370 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:34:07.244469 kubelet[3370]: I0213 19:34:07.244456 3370 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:07.244760 kubelet[3370]: I0213 19:34:07.244734 3370 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:34:07.246254 kubelet[3370]: I0213 19:34:07.246200 3370 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:34:07.248934 kubelet[3370]: I0213 19:34:07.248372 3370 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:07.252261 kubelet[3370]: E0213 19:34:07.252223 3370 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:34:07.252261 kubelet[3370]: I0213 19:34:07.252257 3370 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:34:07.255095 kubelet[3370]: I0213 19:34:07.255070 3370 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:34:07.255244 kubelet[3370]: I0213 19:34:07.255189 3370 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:34:07.255323 kubelet[3370]: I0213 19:34:07.255282 3370 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:07.255486 kubelet[3370]: I0213 19:34:07.255315 3370 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-2ba2208742","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:34:07.255555 kubelet[3370]: I0213 19:34:07.255494 3370 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:34:07.255555 kubelet[3370]: I0213 19:34:07.255505 3370 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:34:07.255555 kubelet[3370]: I0213 19:34:07.255536 3370 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:07.255665 kubelet[3370]: I0213 19:34:07.255648 3370 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:34:07.255665 kubelet[3370]: I0213 19:34:07.255664 3370 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:07.255749 kubelet[3370]: I0213 19:34:07.255687 3370 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:34:07.255749 kubelet[3370]: I0213 19:34:07.255727 3370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:07.259276 kubelet[3370]: I0213 19:34:07.259238 3370 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:34:07.261724 kubelet[3370]: I0213 19:34:07.260804 3370 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:07.265070 kubelet[3370]: I0213 19:34:07.265043 3370 server.go:1269] "Started kubelet" Feb 13 19:34:07.270150 kubelet[3370]: I0213 19:34:07.269591 3370 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:07.273638 
kubelet[3370]: I0213 19:34:07.271379 3370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:07.273638 kubelet[3370]: I0213 19:34:07.271547 3370 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:34:07.273638 kubelet[3370]: I0213 19:34:07.272388 3370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:07.273638 kubelet[3370]: I0213 19:34:07.272578 3370 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:07.275815 kubelet[3370]: I0213 19:34:07.275780 3370 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:34:07.277691 kubelet[3370]: I0213 19:34:07.276448 3370 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:34:07.277691 kubelet[3370]: I0213 19:34:07.276560 3370 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:34:07.277691 kubelet[3370]: I0213 19:34:07.277063 3370 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:07.284239 kubelet[3370]: E0213 19:34:07.283603 3370 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-2ba2208742\" not found" Feb 13 19:34:07.290118 kubelet[3370]: I0213 19:34:07.289808 3370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:07.293547 kubelet[3370]: I0213 19:34:07.292501 3370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:34:07.293547 kubelet[3370]: I0213 19:34:07.292526 3370 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:34:07.293547 kubelet[3370]: I0213 19:34:07.292545 3370 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:34:07.293547 kubelet[3370]: E0213 19:34:07.292587 3370 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:07.293547 kubelet[3370]: I0213 19:34:07.293355 3370 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:07.293547 kubelet[3370]: I0213 19:34:07.293457 3370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:07.313610 kubelet[3370]: E0213 19:34:07.313356 3370 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:07.313610 kubelet[3370]: I0213 19:34:07.313491 3370 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:07.370183 kubelet[3370]: I0213 19:34:07.370153 3370 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:34:07.370183 kubelet[3370]: I0213 19:34:07.370171 3370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:07.370183 kubelet[3370]: I0213 19:34:07.370191 3370 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:07.370377 kubelet[3370]: I0213 19:34:07.370352 3370 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:34:07.370377 kubelet[3370]: I0213 19:34:07.370364 3370 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:34:07.370431 kubelet[3370]: I0213 19:34:07.370381 3370 policy_none.go:49] "None policy: Start" Feb 13 19:34:07.370989 kubelet[3370]: I0213 19:34:07.370972 3370 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:34:07.371036 kubelet[3370]: I0213 19:34:07.370993 3370 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:07.371202 kubelet[3370]: I0213 19:34:07.371169 3370 state_mem.go:75] "Updated machine memory state" Feb 13 19:34:07.375378 kubelet[3370]: I0213 19:34:07.375348 3370 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:07.375731 kubelet[3370]: I0213 19:34:07.375519 3370 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:34:07.375731 kubelet[3370]: I0213 19:34:07.375536 3370 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:07.375919 kubelet[3370]: I0213 19:34:07.375812 3370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:07.416063 kubelet[3370]: W0213 19:34:07.415993 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:34:07.416063 kubelet[3370]: W0213 19:34:07.416036 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:34:07.416335 kubelet[3370]: W0213 19:34:07.416262 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:34:07.478394 kubelet[3370]: I0213 19:34:07.478098 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478394 kubelet[3370]: I0213 19:34:07.478140 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478394 kubelet[3370]: I0213 19:34:07.478164 3370 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0287df5729a5d2922eb12463f0dd5e6-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-2ba2208742\" (UID: \"e0287df5729a5d2922eb12463f0dd5e6\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478394 kubelet[3370]: I0213 19:34:07.478201 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478394 kubelet[3370]: I0213 19:34:07.478218 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478632 kubelet[3370]: I0213 19:34:07.478232 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c18b33dc2ea853922a19cb031b2a973-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" (UID: \"7c18b33dc2ea853922a19cb031b2a973\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478632 kubelet[3370]: I0213 19:34:07.478246 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478632 kubelet[3370]: I0213 19:34:07.478263 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.478632 kubelet[3370]: I0213 19:34:07.478278 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a83cad0123db877571da306a40bd98b-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-2ba2208742\" (UID: \"7a83cad0123db877571da306a40bd98b\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.479758 kubelet[3370]: I0213 19:34:07.479160 3370 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.497846 kubelet[3370]: I0213 19:34:07.496231 3370 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.497846 kubelet[3370]: I0213 19:34:07.496322 3370 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-2ba2208742" Feb 13 19:34:07.634222 sudo[3405]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:34:07.634492 sudo[3405]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:34:08.080159 sudo[3405]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:08.257123 kubelet[3370]: I0213 19:34:08.256313 3370 apiserver.go:52] "Watching apiserver" Feb 13 19:34:08.277157 kubelet[3370]: I0213 19:34:08.277106 3370 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:34:08.369776 kubelet[3370]: W0213 19:34:08.369399 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:34:08.369776 kubelet[3370]: E0213 19:34:08.369484 3370 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.1-a-2ba2208742\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" Feb 13 19:34:08.439450 kubelet[3370]: I0213 19:34:08.438581 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.1-a-2ba2208742" podStartSLOduration=1.438561924 podStartE2EDuration="1.438561924s" podCreationTimestamp="2025-02-13 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.43666964 +0000 UTC m=+1.245050681" watchObservedRunningTime="2025-02-13 19:34:08.438561924 +0000 UTC m=+1.246942965" Feb 13 19:34:08.439870 kubelet[3370]: I0213 19:34:08.439747 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.1-a-2ba2208742" podStartSLOduration=1.4397323659999999 podStartE2EDuration="1.439732366s" podCreationTimestamp="2025-02-13 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.394078521 +0000 UTC m=+1.202459522" watchObservedRunningTime="2025-02-13 19:34:08.439732366 +0000 UTC m=+1.248113407" Feb 13 19:34:08.489056 kubelet[3370]: I0213 19:34:08.488575 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-2ba2208742" podStartSLOduration=1.488455417 podStartE2EDuration="1.488455417s" podCreationTimestamp="2025-02-13 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:08.473414629 +0000 UTC m=+1.281795670" watchObservedRunningTime="2025-02-13 19:34:08.488455417 +0000 UTC m=+1.296836458" Feb 13 19:34:10.029121 sudo[2432]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:10.102747 sshd[2431]: Connection closed by 10.200.16.10 port 57194 Feb 13 19:34:10.103523 sshd-session[2429]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:10.108234 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:57194.service: Deactivated successfully. Feb 13 19:34:10.110626 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:34:10.110873 systemd[1]: session-9.scope: Consumed 6.238s CPU time, 256.8M memory peak. Feb 13 19:34:10.112946 systemd-logind[1711]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:34:10.113861 systemd-logind[1711]: Removed session 9. 
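The container_manager_linux entry above serializes the kubelet's entire node config as JSON after "nodeConfig=", including the hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, and so on). A minimal sketch for pulling that structure back out of a saved copy of this journal; the journal.txt path is hypothetical and the parsing approach is mine, not anything the kubelet provides:

import json

# Read a dump of this journal and decode the first JSON value that
# follows "nodeConfig=" -- raw_decode stops at the end of the object,
# so the log text that runs on after it is ignored.
log = open("journal.txt").read()
start = log.index("nodeConfig=") + len("nodeConfig=")
cfg, _ = json.JSONDecoder().raw_decode(log[start:])

# List the hard-eviction thresholds the kubelet logged at startup.
for t in cfg["HardEvictionThresholds"]:
    v = t["Value"]
    print(t["Signal"], t["Operator"], v["Quantity"] or v["Percentage"])

Run against this log it should report the five signals shown above: nodefs.inodesFree, imagefs.available, imagefs.inodesFree, memory.available, and nodefs.available.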
Feb 13 19:34:12.771419 kubelet[3370]: I0213 19:34:12.771380 3370 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:34:12.771969 containerd[1773]: time="2025-02-13T19:34:12.771729938Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:34:12.772729 kubelet[3370]: I0213 19:34:12.772413 3370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:34:13.790002 systemd[1]: Created slice kubepods-besteffort-pod22dc098d_6664_45cb_8aba_13a412f54d58.slice - libcontainer container kubepods-besteffort-pod22dc098d_6664_45cb_8aba_13a412f54d58.slice. Feb 13 19:34:13.816329 systemd[1]: Created slice kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice - libcontainer container kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice. Feb 13 19:34:13.826961 kubelet[3370]: I0213 19:34:13.826915 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgh72\" (UniqueName: \"kubernetes.io/projected/22dc098d-6664-45cb-8aba-13a412f54d58-kube-api-access-xgh72\") pod \"kube-proxy-s5scf\" (UID: \"22dc098d-6664-45cb-8aba-13a412f54d58\") " pod="kube-system/kube-proxy-s5scf" Feb 13 19:34:13.826961 kubelet[3370]: I0213 19:34:13.826959 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-etc-cni-netd\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.826981 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-run\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.827004 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22dc098d-6664-45cb-8aba-13a412f54d58-lib-modules\") pod \"kube-proxy-s5scf\" (UID: \"22dc098d-6664-45cb-8aba-13a412f54d58\") " pod="kube-system/kube-proxy-s5scf" Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.827023 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-xtables-lock\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.827042 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdda414c-4112-42e0-baf1-705943264f7c-clustermesh-secrets\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.827061 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-hubble-tls\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" 
Feb 13 19:34:13.827333 kubelet[3370]: I0213 19:34:13.827077 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhz4\" (UniqueName: \"kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-kube-api-access-klhz4\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827097 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22dc098d-6664-45cb-8aba-13a412f54d58-kube-proxy\") pod \"kube-proxy-s5scf\" (UID: \"22dc098d-6664-45cb-8aba-13a412f54d58\") " pod="kube-system/kube-proxy-s5scf" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827116 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cni-path\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827136 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdda414c-4112-42e0-baf1-705943264f7c-cilium-config-path\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827152 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-net\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827182 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-bpf-maps\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827467 kubelet[3370]: I0213 19:34:13.827199 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-kernel\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827588 kubelet[3370]: I0213 19:34:13.827222 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22dc098d-6664-45cb-8aba-13a412f54d58-xtables-lock\") pod \"kube-proxy-s5scf\" (UID: \"22dc098d-6664-45cb-8aba-13a412f54d58\") " pod="kube-system/kube-proxy-s5scf" Feb 13 19:34:13.827588 kubelet[3370]: I0213 19:34:13.827237 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-hostproc\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827588 kubelet[3370]: I0213 19:34:13.827256 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-cgroup\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.827588 kubelet[3370]: I0213 19:34:13.827282 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-lib-modules\") pod \"cilium-spbvl\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " pod="kube-system/cilium-spbvl" Feb 13 19:34:13.915791 systemd[1]: Created slice kubepods-besteffort-pode92e1175_5078_4fdc_aaf1_fed1777808fd.slice - libcontainer container kubepods-besteffort-pode92e1175_5078_4fdc_aaf1_fed1777808fd.slice. Feb 13 19:34:13.929374 kubelet[3370]: I0213 19:34:13.927922 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e92e1175-5078-4fdc-aaf1-fed1777808fd-cilium-config-path\") pod \"cilium-operator-5d85765b45-l2bnf\" (UID: \"e92e1175-5078-4fdc-aaf1-fed1777808fd\") " pod="kube-system/cilium-operator-5d85765b45-l2bnf" Feb 13 19:34:13.929374 kubelet[3370]: I0213 19:34:13.927994 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9f8\" (UniqueName: \"kubernetes.io/projected/e92e1175-5078-4fdc-aaf1-fed1777808fd-kube-api-access-nq9f8\") pod \"cilium-operator-5d85765b45-l2bnf\" (UID: \"e92e1175-5078-4fdc-aaf1-fed1777808fd\") " pod="kube-system/cilium-operator-5d85765b45-l2bnf" Feb 13 19:34:14.111975 containerd[1773]: time="2025-02-13T19:34:14.111857843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5scf,Uid:22dc098d-6664-45cb-8aba-13a412f54d58,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:14.128786 containerd[1773]: time="2025-02-13T19:34:14.128529996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spbvl,Uid:fdda414c-4112-42e0-baf1-705943264f7c,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:14.165957 containerd[1773]: time="2025-02-13T19:34:14.164904710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:14.165957 containerd[1773]: time="2025-02-13T19:34:14.164964590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:14.165957 containerd[1773]: time="2025-02-13T19:34:14.164979710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.165957 containerd[1773]: time="2025-02-13T19:34:14.165070070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.185217 systemd[1]: Started cri-containerd-02deeb08b176669939436ad78728ebd916b0a000be196c85b305ef1abb7bcc16.scope - libcontainer container 02deeb08b176669939436ad78728ebd916b0a000be196c85b305ef1abb7bcc16. Feb 13 19:34:14.200511 containerd[1773]: time="2025-02-13T19:34:14.200219861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:14.200511 containerd[1773]: time="2025-02-13T19:34:14.200292061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:14.200511 containerd[1773]: time="2025-02-13T19:34:14.200340901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.200511 containerd[1773]: time="2025-02-13T19:34:14.200454421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.215291 containerd[1773]: time="2025-02-13T19:34:14.215209251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5scf,Uid:22dc098d-6664-45cb-8aba-13a412f54d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"02deeb08b176669939436ad78728ebd916b0a000be196c85b305ef1abb7bcc16\"" Feb 13 19:34:14.223445 containerd[1773]: time="2025-02-13T19:34:14.223148907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l2bnf,Uid:e92e1175-5078-4fdc-aaf1-fed1777808fd,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:14.226465 systemd[1]: Started cri-containerd-815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01.scope - libcontainer container 815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01. Feb 13 19:34:14.227557 containerd[1773]: time="2025-02-13T19:34:14.227315916Z" level=info msg="CreateContainer within sandbox \"02deeb08b176669939436ad78728ebd916b0a000be196c85b305ef1abb7bcc16\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:34:14.259878 containerd[1773]: time="2025-02-13T19:34:14.259836181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spbvl,Uid:fdda414c-4112-42e0-baf1-705943264f7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\"" Feb 13 19:34:14.271118 containerd[1773]: time="2025-02-13T19:34:14.270769963Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:34:14.333675 containerd[1773]: time="2025-02-13T19:34:14.332043487Z" level=info msg="CreateContainer within sandbox \"02deeb08b176669939436ad78728ebd916b0a000be196c85b305ef1abb7bcc16\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"760cdd3f9bddbb9dbbcc656d9602c4f225892796b7ffba29eab3fe9bf0d75c2d\"" Feb 13 19:34:14.334349 containerd[1773]: time="2025-02-13T19:34:14.334311572Z" level=info msg="StartContainer for \"760cdd3f9bddbb9dbbcc656d9602c4f225892796b7ffba29eab3fe9bf0d75c2d\"" Feb 13 19:34:14.338770 containerd[1773]: time="2025-02-13T19:34:14.338658220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:14.338943 containerd[1773]: time="2025-02-13T19:34:14.338747701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:14.338943 containerd[1773]: time="2025-02-13T19:34:14.338920541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.339138 containerd[1773]: time="2025-02-13T19:34:14.339087021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:14.358942 systemd[1]: Started cri-containerd-81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54.scope - libcontainer container 81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54. Feb 13 19:34:14.373915 systemd[1]: Started cri-containerd-760cdd3f9bddbb9dbbcc656d9602c4f225892796b7ffba29eab3fe9bf0d75c2d.scope - libcontainer container 760cdd3f9bddbb9dbbcc656d9602c4f225892796b7ffba29eab3fe9bf0d75c2d. Feb 13 19:34:14.409255 containerd[1773]: time="2025-02-13T19:34:14.409161003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l2bnf,Uid:e92e1175-5078-4fdc-aaf1-fed1777808fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\"" Feb 13 19:34:14.429654 containerd[1773]: time="2025-02-13T19:34:14.429602244Z" level=info msg="StartContainer for \"760cdd3f9bddbb9dbbcc656d9602c4f225892796b7ffba29eab3fe9bf0d75c2d\" returns successfully" Feb 13 19:34:15.385856 kubelet[3370]: I0213 19:34:15.385491 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5scf" podStartSLOduration=2.385470693 podStartE2EDuration="2.385470693s" podCreationTimestamp="2025-02-13 19:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:15.38390313 +0000 UTC m=+8.192284171" watchObservedRunningTime="2025-02-13 19:34:15.385470693 +0000 UTC m=+8.193851734" Feb 13 19:34:20.530717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039956806.mount: Deactivated successfully. Feb 13 19:34:22.875685 containerd[1773]: time="2025-02-13T19:34:22.874822599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.878581 containerd[1773]: time="2025-02-13T19:34:22.878499326Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:34:22.882537 containerd[1773]: time="2025-02-13T19:34:22.882482414Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.884755 containerd[1773]: time="2025-02-13T19:34:22.884282777Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.613102173s" Feb 13 19:34:22.884755 containerd[1773]: time="2025-02-13T19:34:22.884329417Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:34:22.886748 containerd[1773]: time="2025-02-13T19:34:22.886654901Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:34:22.887931 containerd[1773]: 
time="2025-02-13T19:34:22.887894344Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:34:22.932864 containerd[1773]: time="2025-02-13T19:34:22.932808107Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\"" Feb 13 19:34:22.934822 containerd[1773]: time="2025-02-13T19:34:22.933612029Z" level=info msg="StartContainer for \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\"" Feb 13 19:34:22.964914 systemd[1]: Started cri-containerd-045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0.scope - libcontainer container 045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0. Feb 13 19:34:22.993102 containerd[1773]: time="2025-02-13T19:34:22.993036099Z" level=info msg="StartContainer for \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\" returns successfully" Feb 13 19:34:23.003838 systemd[1]: cri-containerd-045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0.scope: Deactivated successfully. Feb 13 19:34:23.917123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0-rootfs.mount: Deactivated successfully. Feb 13 19:34:24.021521 containerd[1773]: time="2025-02-13T19:34:24.021458290Z" level=info msg="shim disconnected" id=045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0 namespace=k8s.io Feb 13 19:34:24.021521 containerd[1773]: time="2025-02-13T19:34:24.021514730Z" level=warning msg="cleaning up after shim disconnected" id=045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0 namespace=k8s.io Feb 13 19:34:24.021521 containerd[1773]: time="2025-02-13T19:34:24.021524490Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:24.385528 containerd[1773]: time="2025-02-13T19:34:24.385479687Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:34:24.432237 containerd[1773]: time="2025-02-13T19:34:24.432185813Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\"" Feb 13 19:34:24.434244 containerd[1773]: time="2025-02-13T19:34:24.434202737Z" level=info msg="StartContainer for \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\"" Feb 13 19:34:24.459664 systemd[1]: Started cri-containerd-a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2.scope - libcontainer container a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2. Feb 13 19:34:24.489170 containerd[1773]: time="2025-02-13T19:34:24.489089519Z" level=info msg="StartContainer for \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\" returns successfully" Feb 13 19:34:24.499533 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:34:24.499762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:34:24.499976 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:34:24.508079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:34:24.508279 systemd[1]: cri-containerd-a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2.scope: Deactivated successfully. Feb 13 19:34:24.525754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:34:24.543972 containerd[1773]: time="2025-02-13T19:34:24.543769021Z" level=info msg="shim disconnected" id=a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2 namespace=k8s.io Feb 13 19:34:24.543972 containerd[1773]: time="2025-02-13T19:34:24.543825461Z" level=warning msg="cleaning up after shim disconnected" id=a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2 namespace=k8s.io Feb 13 19:34:24.543972 containerd[1773]: time="2025-02-13T19:34:24.543833901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:24.917215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2-rootfs.mount: Deactivated successfully. Feb 13 19:34:25.393285 containerd[1773]: time="2025-02-13T19:34:25.393082919Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:34:25.444834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261990470.mount: Deactivated successfully. Feb 13 19:34:25.481443 containerd[1773]: time="2025-02-13T19:34:25.480871562Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\"" Feb 13 19:34:25.481583 containerd[1773]: time="2025-02-13T19:34:25.481544803Z" level=info msg="StartContainer for \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\"" Feb 13 19:34:25.523904 systemd[1]: Started cri-containerd-c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50.scope - libcontainer container c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50. Feb 13 19:34:25.553417 systemd[1]: cri-containerd-c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50.scope: Deactivated successfully. Feb 13 19:34:25.558862 containerd[1773]: time="2025-02-13T19:34:25.558817907Z" level=info msg="StartContainer for \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\" returns successfully" Feb 13 19:34:25.599668 containerd[1773]: time="2025-02-13T19:34:25.599548463Z" level=info msg="shim disconnected" id=c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50 namespace=k8s.io Feb 13 19:34:25.599668 containerd[1773]: time="2025-02-13T19:34:25.599616383Z" level=warning msg="cleaning up after shim disconnected" id=c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50 namespace=k8s.io Feb 13 19:34:25.599668 containerd[1773]: time="2025-02-13T19:34:25.599627503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:25.917356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:26.402814 containerd[1773]: time="2025-02-13T19:34:26.402119196Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:34:26.444150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168690606.mount: Deactivated successfully. Feb 13 19:34:26.457082 containerd[1773]: time="2025-02-13T19:34:26.457040139Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\"" Feb 13 19:34:26.460744 containerd[1773]: time="2025-02-13T19:34:26.460607945Z" level=info msg="StartContainer for \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\"" Feb 13 19:34:26.502503 systemd[1]: Started cri-containerd-b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8.scope - libcontainer container b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8. Feb 13 19:34:26.538808 systemd[1]: cri-containerd-b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8.scope: Deactivated successfully. Feb 13 19:34:26.546081 containerd[1773]: time="2025-02-13T19:34:26.546029944Z" level=info msg="StartContainer for \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\" returns successfully" Feb 13 19:34:26.546345 containerd[1773]: time="2025-02-13T19:34:26.545851424Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice/cri-containerd-b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8.scope/memory.events\": no such file or directory" Feb 13 19:34:26.798475 containerd[1773]: time="2025-02-13T19:34:26.798143173Z" level=info msg="shim disconnected" id=b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8 namespace=k8s.io Feb 13 19:34:26.798475 containerd[1773]: time="2025-02-13T19:34:26.798212333Z" level=warning msg="cleaning up after shim disconnected" id=b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8 namespace=k8s.io Feb 13 19:34:26.798475 containerd[1773]: time="2025-02-13T19:34:26.798226733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:26.810624 containerd[1773]: time="2025-02-13T19:34:26.810447796Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:34:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:34:26.917348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8-rootfs.mount: Deactivated successfully. 
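The teardown of each init container produces warning-level noise -- the missing memory.events inotify watch and the runc "exit status 255" cleanup above -- that is easy to confuse with real failures. One way to audit such warnings in bulk, with a regex that walks over the \" escapes journalctl leaves inside the rendered msg="..." fields instead of stopping at them (my own approach; journal.txt hypothetical):

import re

log = open("journal.txt").read()

# Collect every warning-level containerd message; (?:[^"\\]|\\.)* is the
# usual quoted-string pattern that tolerates embedded escaped quotes.
for m in re.finditer(r'level=warning msg="((?:[^"\\]|\\.)*)"', log):
    print(m.group(1)[:100])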
Feb 13 19:34:26.936885 containerd[1773]: time="2025-02-13T19:34:26.936777511Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.941728 containerd[1773]: time="2025-02-13T19:34:26.941573480Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:34:26.945976 containerd[1773]: time="2025-02-13T19:34:26.945553848Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.947555 containerd[1773]: time="2025-02-13T19:34:26.947460571Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.06071227s" Feb 13 19:34:26.947820 containerd[1773]: time="2025-02-13T19:34:26.947689732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:34:26.952343 containerd[1773]: time="2025-02-13T19:34:26.952133460Z" level=info msg="CreateContainer within sandbox \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:34:26.996777 containerd[1773]: time="2025-02-13T19:34:26.995897461Z" level=info msg="CreateContainer within sandbox \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\"" Feb 13 19:34:26.998207 containerd[1773]: time="2025-02-13T19:34:26.997096424Z" level=info msg="StartContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\"" Feb 13 19:34:27.027917 systemd[1]: Started cri-containerd-22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076.scope - libcontainer container 22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076. 
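Both Pulled entries report their own duration ("in 8.613102173s" for the agent image earlier, "in 4.06071227s" for the operator here), and those figures can be cross-checked against the PullImage/Pulled timestamps containerd prints. A sketch of that check (Python 3.11+ so fromisoformat accepts the trailing Z and nanosecond fractions; journal.txt hypothetical):

import re
from datetime import datetime

log = open("journal.txt").read()

def event_time(msg_pattern):
    # First containerd event whose msg matches; the time="..." field
    # carries RFC 3339 with nanoseconds.
    stamp = re.search(r'time="([^"]+)" level=info msg="' + msg_pattern, log).group(1)
    return datetime.fromisoformat(stamp)

img = r'quay\.io/cilium/operator-generic:v1\.12\.5'
pull_start = event_time(r'PullImage \\"' + img)
pull_done  = event_time(r'Pulled image \\"' + img)
print((pull_done - pull_start).total_seconds())   # expect ~4.06 s, as reported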
Feb 13 19:34:27.056392 containerd[1773]: time="2025-02-13T19:34:27.056270094Z" level=info msg="StartContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" returns successfully" Feb 13 19:34:27.406957 containerd[1773]: time="2025-02-13T19:34:27.406886506Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:34:27.468026 containerd[1773]: time="2025-02-13T19:34:27.467977460Z" level=info msg="CreateContainer within sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\"" Feb 13 19:34:27.468943 containerd[1773]: time="2025-02-13T19:34:27.468904302Z" level=info msg="StartContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\"" Feb 13 19:34:27.511919 systemd[1]: Started cri-containerd-029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0.scope - libcontainer container 029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0. Feb 13 19:34:27.585408 containerd[1773]: time="2025-02-13T19:34:27.585200238Z" level=info msg="StartContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" returns successfully" Feb 13 19:34:27.809321 kubelet[3370]: I0213 19:34:27.808370 3370 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:34:27.907155 kubelet[3370]: I0213 19:34:27.907099 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-l2bnf" podStartSLOduration=2.371199314 podStartE2EDuration="14.907079277s" podCreationTimestamp="2025-02-13 19:34:13 +0000 UTC" firstStartedPulling="2025-02-13 19:34:14.41264293 +0000 UTC m=+7.221023971" lastFinishedPulling="2025-02-13 19:34:26.948522893 +0000 UTC m=+19.756903934" observedRunningTime="2025-02-13 19:34:27.45191799 +0000 UTC m=+20.260299111" watchObservedRunningTime="2025-02-13 19:34:27.907079277 +0000 UTC m=+20.715460318" Feb 13 19:34:27.921605 kubelet[3370]: W0213 19:34:27.921516 3370 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230.0.1-a-2ba2208742" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.0.1-a-2ba2208742' and this object Feb 13 19:34:27.921850 kubelet[3370]: E0213 19:34:27.921819 3370 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230.0.1-a-2ba2208742\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.0.1-a-2ba2208742' and this object" logger="UnhandledError" Feb 13 19:34:27.927313 systemd[1]: Created slice kubepods-burstable-podfda852a2_a56e_418f_83ac_87519de72155.slice - libcontainer container kubepods-burstable-podfda852a2_a56e_418f_83ac_87519de72155.slice. Feb 13 19:34:27.937183 systemd[1]: Created slice kubepods-burstable-podbcd6d42a_9aa2_44e5_ab0a_77260794d955.slice - libcontainer container kubepods-burstable-podbcd6d42a_9aa2_44e5_ab0a_77260794d955.slice. 
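The operator's latency line above encodes a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking it with the seconds offsets printed in the entry (my own shorthand; all times fall within 19:34 UTC):

# Offsets in seconds past 19:34:00 UTC, copied from the tracker entry above.
created       = 13.0            # podCreationTimestamp  19:34:13
first_pulling = 14.412642930    # firstStartedPulling
last_pulled   = 26.948522893    # lastFinishedPulling
watch_running = 27.907079277    # watchObservedRunningTime

e2e = watch_running - created             # 14.907079277 s, as reported
slo = e2e - (last_pulled - first_pulling)
print(e2e, slo)                           # slo ~= 2.371199314 s, as reported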
Feb 13 19:34:28.020994 kubelet[3370]: I0213 19:34:28.020845 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd6d42a-9aa2-44e5-ab0a-77260794d955-config-volume\") pod \"coredns-6f6b679f8f-dgbh2\" (UID: \"bcd6d42a-9aa2-44e5-ab0a-77260794d955\") " pod="kube-system/coredns-6f6b679f8f-dgbh2" Feb 13 19:34:28.020994 kubelet[3370]: I0213 19:34:28.020885 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8pl\" (UniqueName: \"kubernetes.io/projected/bcd6d42a-9aa2-44e5-ab0a-77260794d955-kube-api-access-rp8pl\") pod \"coredns-6f6b679f8f-dgbh2\" (UID: \"bcd6d42a-9aa2-44e5-ab0a-77260794d955\") " pod="kube-system/coredns-6f6b679f8f-dgbh2" Feb 13 19:34:28.020994 kubelet[3370]: I0213 19:34:28.020921 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jw4g\" (UniqueName: \"kubernetes.io/projected/fda852a2-a56e-418f-83ac-87519de72155-kube-api-access-6jw4g\") pod \"coredns-6f6b679f8f-2nlt4\" (UID: \"fda852a2-a56e-418f-83ac-87519de72155\") " pod="kube-system/coredns-6f6b679f8f-2nlt4" Feb 13 19:34:28.020994 kubelet[3370]: I0213 19:34:28.020938 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fda852a2-a56e-418f-83ac-87519de72155-config-volume\") pod \"coredns-6f6b679f8f-2nlt4\" (UID: \"fda852a2-a56e-418f-83ac-87519de72155\") " pod="kube-system/coredns-6f6b679f8f-2nlt4" Feb 13 19:34:29.134896 containerd[1773]: time="2025-02-13T19:34:29.134851002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2nlt4,Uid:fda852a2-a56e-418f-83ac-87519de72155,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:29.144226 containerd[1773]: time="2025-02-13T19:34:29.144188539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dgbh2,Uid:bcd6d42a-9aa2-44e5-ab0a-77260794d955,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:30.842268 systemd-networkd[1436]: cilium_host: Link UP Feb 13 19:34:30.842376 systemd-networkd[1436]: cilium_net: Link UP Feb 13 19:34:30.842485 systemd-networkd[1436]: cilium_net: Gained carrier Feb 13 19:34:30.842587 systemd-networkd[1436]: cilium_host: Gained carrier Feb 13 19:34:30.998256 systemd-networkd[1436]: cilium_vxlan: Link UP Feb 13 19:34:30.998266 systemd-networkd[1436]: cilium_vxlan: Gained carrier Feb 13 19:34:31.319850 systemd-networkd[1436]: cilium_net: Gained IPv6LL Feb 13 19:34:31.341282 kernel: NET: Registered PF_ALG protocol family Feb 13 19:34:31.551901 systemd-networkd[1436]: cilium_host: Gained IPv6LL Feb 13 19:34:32.046666 systemd-networkd[1436]: lxc_health: Link UP Feb 13 19:34:32.048880 systemd-networkd[1436]: lxc_health: Gained carrier Feb 13 19:34:32.154299 kubelet[3370]: I0213 19:34:32.154227 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-spbvl" podStartSLOduration=10.533899634 podStartE2EDuration="19.154211101s" podCreationTimestamp="2025-02-13 19:34:13 +0000 UTC" firstStartedPulling="2025-02-13 19:34:14.264941912 +0000 UTC m=+7.073322913" lastFinishedPulling="2025-02-13 19:34:22.885253299 +0000 UTC m=+15.693634380" observedRunningTime="2025-02-13 19:34:28.449402166 +0000 UTC m=+21.257783207" watchObservedRunningTime="2025-02-13 19:34:32.154211101 +0000 UTC m=+24.962592142" Feb 13 19:34:32.253736 kernel: eth0: renamed from tmp7642e Feb 13 
19:34:32.264881 systemd-networkd[1436]: lxc2111f92ef9e3: Link UP Feb 13 19:34:32.265145 systemd-networkd[1436]: lxc2111f92ef9e3: Gained carrier Feb 13 19:34:32.290849 kernel: eth0: renamed from tmp0db09 Feb 13 19:34:32.292785 systemd-networkd[1436]: lxca57038750aba: Link UP Feb 13 19:34:32.295341 systemd-networkd[1436]: lxca57038750aba: Gained carrier Feb 13 19:34:32.575910 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Feb 13 19:34:33.151901 systemd-networkd[1436]: lxc_health: Gained IPv6LL Feb 13 19:34:33.600012 systemd-networkd[1436]: lxc2111f92ef9e3: Gained IPv6LL Feb 13 19:34:33.663868 systemd-networkd[1436]: lxca57038750aba: Gained IPv6LL Feb 13 19:34:36.172529 containerd[1773]: time="2025-02-13T19:34:36.172285416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:36.172529 containerd[1773]: time="2025-02-13T19:34:36.172437496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:36.172529 containerd[1773]: time="2025-02-13T19:34:36.172450176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:36.179045 containerd[1773]: time="2025-02-13T19:34:36.173279738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:36.205090 containerd[1773]: time="2025-02-13T19:34:36.204905397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:36.206031 containerd[1773]: time="2025-02-13T19:34:36.205517999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:36.206031 containerd[1773]: time="2025-02-13T19:34:36.205727639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:36.207389 containerd[1773]: time="2025-02-13T19:34:36.206482320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:36.232084 systemd[1]: Started cri-containerd-7642ef3c0706258739643801f74c6af2e5e952a352823b21f5140355880bef30.scope - libcontainer container 7642ef3c0706258739643801f74c6af2e5e952a352823b21f5140355880bef30. Feb 13 19:34:36.253913 systemd[1]: Started cri-containerd-0db0921b8cfa2fcffca475e7d10ff8724ffc6dc1d4204945534c2a31225d5b87.scope - libcontainer container 0db0921b8cfa2fcffca475e7d10ff8724ffc6dc1d4204945534c2a31225d5b87. 
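The systemd-networkd burst above is the cilium datapath assembling: cilium_host and cilium_net appear together (they form a veth pair in cilium's design, though the log itself does not say so), cilium_vxlan follows, then lxc_health and one lxc* device per coredns pod, each progressing Link UP -> Gained carrier -> Gained IPv6LL. A sketch that reconstructs the per-interface timeline from the journal (journal.txt hypothetical, regex mine):

import re
from collections import defaultdict

log = open("journal.txt").read()

timeline = defaultdict(list)
for ts, link, event in re.findall(
        r'(\d\d:\d\d:\d\d\.\d{6}) systemd-networkd\[\d+\]: '
        r'([\w.]+): (Link UP|Gained carrier|Gained IPv6LL)', log):
    timeline[link].append((ts, event))

for link, events in sorted(timeline.items()):
    print(link, "->", ", ".join(f"{e} @ {t}" for t, e in events))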
Feb 13 19:34:36.303214 containerd[1773]: time="2025-02-13T19:34:36.303166303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2nlt4,Uid:fda852a2-a56e-418f-83ac-87519de72155,Namespace:kube-system,Attempt:0,} returns sandbox id \"7642ef3c0706258739643801f74c6af2e5e952a352823b21f5140355880bef30\"" Feb 13 19:34:36.310914 containerd[1773]: time="2025-02-13T19:34:36.310638117Z" level=info msg="CreateContainer within sandbox \"7642ef3c0706258739643801f74c6af2e5e952a352823b21f5140355880bef30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:36.312974 containerd[1773]: time="2025-02-13T19:34:36.312880122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dgbh2,Uid:bcd6d42a-9aa2-44e5-ab0a-77260794d955,Namespace:kube-system,Attempt:0,} returns sandbox id \"0db0921b8cfa2fcffca475e7d10ff8724ffc6dc1d4204945534c2a31225d5b87\"" Feb 13 19:34:36.323072 containerd[1773]: time="2025-02-13T19:34:36.321903379Z" level=info msg="CreateContainer within sandbox \"0db0921b8cfa2fcffca475e7d10ff8724ffc6dc1d4204945534c2a31225d5b87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:36.375416 containerd[1773]: time="2025-02-13T19:34:36.375367040Z" level=info msg="CreateContainer within sandbox \"7642ef3c0706258739643801f74c6af2e5e952a352823b21f5140355880bef30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"959cb5bce8f17e4fa91073cf211461a4e3d64e1e052efef315c0f90ff558d373\"" Feb 13 19:34:36.377219 containerd[1773]: time="2025-02-13T19:34:36.376473322Z" level=info msg="StartContainer for \"959cb5bce8f17e4fa91073cf211461a4e3d64e1e052efef315c0f90ff558d373\"" Feb 13 19:34:36.391562 containerd[1773]: time="2025-02-13T19:34:36.391506670Z" level=info msg="CreateContainer within sandbox \"0db0921b8cfa2fcffca475e7d10ff8724ffc6dc1d4204945534c2a31225d5b87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2caece96ce225f84cb5ef6be20d56f828a42b26cb1497ff13a028480d1f990e4\"" Feb 13 19:34:36.393608 containerd[1773]: time="2025-02-13T19:34:36.393563394Z" level=info msg="StartContainer for \"2caece96ce225f84cb5ef6be20d56f828a42b26cb1497ff13a028480d1f990e4\"" Feb 13 19:34:36.407945 systemd[1]: Started cri-containerd-959cb5bce8f17e4fa91073cf211461a4e3d64e1e052efef315c0f90ff558d373.scope - libcontainer container 959cb5bce8f17e4fa91073cf211461a4e3d64e1e052efef315c0f90ff558d373. Feb 13 19:34:36.424911 systemd[1]: Started cri-containerd-2caece96ce225f84cb5ef6be20d56f828a42b26cb1497ff13a028480d1f990e4.scope - libcontainer container 2caece96ce225f84cb5ef6be20d56f828a42b26cb1497ff13a028480d1f990e4. 
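Most containerd and systemd messages from here on refer to pods only through 64-hex sandbox IDs. The RunPodSandbox returns above carry the mapping back to pod names; a small sketch to recover it (journal.txt hypothetical, regex mine):

import re

log = open("journal.txt").read()

# "RunPodSandbox for &PodSandboxMetadata{Name:...,Uid:...,Namespace:...}
#  returns sandbox id \"...\"" ties each sandbox ID to its pod.
for name, uid, ns, sid in re.findall(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([\w-]+),Uid:([\w-]+),'
        r'Namespace:([\w-]+),Attempt:\d+,\} returns sandbox id \\"([0-9a-f]{64})\\"',
        log):
    print(f"{ns}/{name} ({uid}) -> {sid[:12]}")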
Feb 13 19:34:36.457645 containerd[1773]: time="2025-02-13T19:34:36.457338915Z" level=info msg="StartContainer for \"959cb5bce8f17e4fa91073cf211461a4e3d64e1e052efef315c0f90ff558d373\" returns successfully" Feb 13 19:34:36.473861 containerd[1773]: time="2025-02-13T19:34:36.473807386Z" level=info msg="StartContainer for \"2caece96ce225f84cb5ef6be20d56f828a42b26cb1497ff13a028480d1f990e4\" returns successfully" Feb 13 19:34:37.482436 kubelet[3370]: I0213 19:34:37.482324 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dgbh2" podStartSLOduration=24.482304813 podStartE2EDuration="24.482304813s" podCreationTimestamp="2025-02-13 19:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:37.480010649 +0000 UTC m=+30.288391690" watchObservedRunningTime="2025-02-13 19:34:37.482304813 +0000 UTC m=+30.290685894" Feb 13 19:34:37.536975 kubelet[3370]: I0213 19:34:37.536912 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2nlt4" podStartSLOduration=24.536894556 podStartE2EDuration="24.536894556s" podCreationTimestamp="2025-02-13 19:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:37.506618379 +0000 UTC m=+30.314999380" watchObservedRunningTime="2025-02-13 19:34:37.536894556 +0000 UTC m=+30.345275597" Feb 13 19:37:32.604702 update_engine[1713]: I20250213 19:37:32.604645 1713 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.604724 1713 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.604876 1713 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605206 1713 omaha_request_params.cc:62] Current group set to alpha Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605293 1713 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605301 1713 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605315 1713 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605345 1713 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605566 1713 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605583 1713 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 13 19:37:32.609419 update_engine[1713]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 13 19:37:32.609419 update_engine[1713]: <os version="Chateau" platform="CoreOS" sp="4230.0.1_aarch64"></os> Feb 13 19:37:32.609419 update_engine[1713]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.1" track="alpha" bootid="{88f9a218-d9b9-4977-9748-dcd77812cfbc}" oem="azure" oemversion="2.9.1.1-r3" alephversion="4230.0.1" machineid="edbd69acd2c347d3969f0d22e5f2715d" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Feb 13 19:37:32.609419 update_engine[1713]: <ping active="1"></ping> Feb 13 19:37:32.609419 update_engine[1713]: <updatecheck></updatecheck> Feb 13 19:37:32.609419 update_engine[1713]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Feb 13 19:37:32.609419 update_engine[1713]: </app> Feb 13 19:37:32.609419 update_engine[1713]: </request> Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.605590 1713 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.606724 1713 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:32.609419 update_engine[1713]: I20250213 19:37:32.607069 1713 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:37:32.609875 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:37:32.693995 update_engine[1713]: E20250213 19:37:32.693938 1713 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:32.694113 update_engine[1713]: I20250213 19:37:32.694044 1713 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:37:42.513981 update_engine[1713]: I20250213 19:37:42.513919 1713 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:42.514319 update_engine[1713]: I20250213 19:37:42.514134 1713 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:42.514413 update_engine[1713]: I20250213 19:37:42.514358 1713 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 19:37:42.617443 update_engine[1713]: E20250213 19:37:42.617388 1713 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:42.617568 update_engine[1713]: I20250213 19:37:42.617472 1713 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:37:52.511801 update_engine[1713]: I20250213 19:37:52.511732 1713 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:52.512152 update_engine[1713]: I20250213 19:37:52.511962 1713 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:52.512247 update_engine[1713]: I20250213 19:37:52.512212 1713 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:37:52.541119 update_engine[1713]: E20250213 19:37:52.541049 1713 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:52.541243 update_engine[1713]: I20250213 19:37:52.541157 1713 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:38:02.513831 update_engine[1713]: I20250213 19:38:02.513775 1713 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:38:02.514208 update_engine[1713]: I20250213 19:38:02.513976 1713 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:38:02.514208 update_engine[1713]: I20250213 19:38:02.514186 1713 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:38:02.562202 update_engine[1713]: E20250213 19:38:02.562129 1713 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:38:02.562335 update_engine[1713]: I20250213 19:38:02.562230 1713 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:38:02.562335 update_engine[1713]: I20250213 19:38:02.562238 1713 omaha_request_action.cc:617] Omaha request response: Feb 13 19:38:02.562335 update_engine[1713]: E20250213 19:38:02.562325 1713 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562345 1713 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562350 1713 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562353 1713 update_attempter.cc:306] Processing Done. Feb 13 19:38:02.562396 update_engine[1713]: E20250213 19:38:02.562367 1713 update_attempter.cc:619] Update failed. Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562372 1713 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562376 1713 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 19:38:02.562396 update_engine[1713]: I20250213 19:38:02.562382 1713 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 13 19:38:02.562540 update_engine[1713]: I20250213 19:38:02.562448 1713 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:38:02.562540 update_engine[1713]: I20250213 19:38:02.562469 1713 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:38:02.562540 update_engine[1713]: I20250213 19:38:02.562475 1713 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 13 19:38:02.562540 update_engine[1713]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 13 19:38:02.562540 update_engine[1713]: <os version="Chateau" platform="CoreOS" sp="4230.0.1_aarch64"></os> Feb 13 19:38:02.562540 update_engine[1713]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.1" track="alpha" bootid="{88f9a218-d9b9-4977-9748-dcd77812cfbc}" oem="azure" oemversion="2.9.1.1-r3" alephversion="4230.0.1" machineid="edbd69acd2c347d3969f0d22e5f2715d" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Feb 13 19:38:02.562540 update_engine[1713]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Feb 13 19:38:02.562540 update_engine[1713]: </app> Feb 13 19:38:02.562540 update_engine[1713]: </request> Feb 13 19:38:02.562540 update_engine[1713]: I20250213 19:38:02.562481 1713 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:38:02.562729 update_engine[1713]: I20250213 19:38:02.562615 1713 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:38:02.563006 update_engine[1713]: I20250213 19:38:02.562857 1713 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:38:02.563058 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 19:38:02.566882 update_engine[1713]: E20250213 19:38:02.566848 1713 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566905 1713 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566916 1713 omaha_request_action.cc:617] Omaha request response: Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566923 1713 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566928 1713 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566933 1713 update_attempter.cc:306] Processing Done. Feb 13 19:38:02.566944 update_engine[1713]: I20250213 19:38:02.566938 1713 update_attempter.cc:310] Error event sent. Feb 13 19:38:02.567082 update_engine[1713]: I20250213 19:38:02.566947 1713 update_check_scheduler.cc:74] Next update check in 40m0s Feb 13 19:38:02.567242 locksmithd[1818]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 19:38:32.032557 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:48020.service - OpenSSH per-connection server daemon (10.200.16.10:48020). 
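The update_engine block above is Flatcar's first scheduled Omaha update check, and it fails by design: the request is posted "to disabled" because the update server has evidently been set to the literal string disabled (conventionally SERVER=disabled in /etc/flatcar/update.conf; the config file itself is not shown in this log, so that is an assumption). curl therefore cannot resolve the host ("Could not resolve host: disabled"), the fetcher retries three times about 10 s apart (attempts at 19:37:32, :42, :52 and 19:38:02), the failure is mapped to error 2000 / kActionCodeOmahaErrorInHTTPResponse, a failure event is posted to the same unreachable endpoint, and the next check is scheduled 40 minutes out. A rough Go model of that fetch-and-retry behaviour (illustrative, not update_engine's real control flow; the URL path is made up):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// checkForUpdate makes an initial attempt plus three retries ~10s apart,
// matching the "No HTTP response, retry N" cadence in the log, then gives up.
func checkForUpdate(url string) error {
	for attempt := 0; attempt <= 3; attempt++ {
		resp, err := http.Post(url, "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if attempt == 3 {
			return errors.New("kActionCodeOmahaErrorInHTTPResponse")
		}
		fmt.Printf("No HTTP response, retry %d\n", attempt+1)
		time.Sleep(10 * time.Second)
	}
	return nil // unreachable; keeps the compiler satisfied
}

func main() {
	// "disabled" is not a resolvable hostname, so this always fails by design.
	if err := checkForUpdate("http://disabled/v1/update/"); err != nil {
		fmt.Println("Update failed:", err)
		fmt.Println("Next update check in", 40*time.Minute)
	}
}
```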
Feb 13 19:38:32.508227 sshd[4789]: Accepted publickey for core from 10.200.16.10 port 48020 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:32.509667 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:32.514898 systemd-logind[1711]: New session 10 of user core. Feb 13 19:38:32.518871 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:38:32.930299 sshd[4791]: Connection closed by 10.200.16.10 port 48020 Feb 13 19:38:32.929757 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:32.932797 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:48020.service: Deactivated successfully. Feb 13 19:38:32.936012 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:38:32.937566 systemd-logind[1711]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:38:32.939451 systemd-logind[1711]: Removed session 10. Feb 13 19:38:38.012965 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:48032.service - OpenSSH per-connection server daemon (10.200.16.10:48032). Feb 13 19:38:38.443904 sshd[4804]: Accepted publickey for core from 10.200.16.10 port 48032 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:38.445225 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:38.450335 systemd-logind[1711]: New session 11 of user core. Feb 13 19:38:38.456944 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:38:38.825588 sshd[4806]: Connection closed by 10.200.16.10 port 48032 Feb 13 19:38:38.826164 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:38.829804 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:48032.service: Deactivated successfully. Feb 13 19:38:38.832473 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:38:38.833835 systemd-logind[1711]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:38:38.834777 systemd-logind[1711]: Removed session 11. Feb 13 19:38:43.919378 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:56280.service - OpenSSH per-connection server daemon (10.200.16.10:56280). Feb 13 19:38:44.364780 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 56280 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:44.366043 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:44.370669 systemd-logind[1711]: New session 12 of user core. Feb 13 19:38:44.376899 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:38:44.757646 sshd[4822]: Connection closed by 10.200.16.10 port 56280 Feb 13 19:38:44.758420 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:44.761547 systemd-logind[1711]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:38:44.762275 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:56280.service: Deactivated successfully. Feb 13 19:38:44.764634 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:38:44.766652 systemd-logind[1711]: Removed session 12. Feb 13 19:38:49.849962 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:32894.service - OpenSSH per-connection server daemon (10.200.16.10:32894). 
Feb 13 19:38:50.337172 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 32894 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:50.338481 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:50.342771 systemd-logind[1711]: New session 13 of user core. Feb 13 19:38:50.345859 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:38:50.763020 sshd[4839]: Connection closed by 10.200.16.10 port 32894 Feb 13 19:38:50.762825 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:50.766823 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:32894.service: Deactivated successfully. Feb 13 19:38:50.768350 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:38:50.769614 systemd-logind[1711]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:38:50.770574 systemd-logind[1711]: Removed session 13. Feb 13 19:38:50.854019 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:32902.service - OpenSSH per-connection server daemon (10.200.16.10:32902). Feb 13 19:38:51.319913 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 32902 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:51.321224 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:51.325687 systemd-logind[1711]: New session 14 of user core. Feb 13 19:38:51.332877 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:38:51.760610 sshd[4853]: Connection closed by 10.200.16.10 port 32902 Feb 13 19:38:51.761257 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:51.763983 systemd-logind[1711]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:38:51.764143 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:32902.service: Deactivated successfully. Feb 13 19:38:51.766246 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:38:51.768418 systemd-logind[1711]: Removed session 14. Feb 13 19:38:51.854022 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:32906.service - OpenSSH per-connection server daemon (10.200.16.10:32906). Feb 13 19:38:52.327140 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 32906 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:52.328430 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:52.332620 systemd-logind[1711]: New session 15 of user core. Feb 13 19:38:52.337941 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:38:52.741381 sshd[4865]: Connection closed by 10.200.16.10 port 32906 Feb 13 19:38:52.741954 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:52.745420 systemd-logind[1711]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:38:52.745421 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:32906.service: Deactivated successfully. Feb 13 19:38:52.747460 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:38:52.748908 systemd-logind[1711]: Removed session 15. Feb 13 19:38:57.827962 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:32908.service - OpenSSH per-connection server daemon (10.200.16.10:32908). 
Feb 13 19:38:58.294071 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 32908 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:38:58.295491 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:58.300026 systemd-logind[1711]: New session 16 of user core. Feb 13 19:38:58.304880 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:38:58.703082 sshd[4878]: Connection closed by 10.200.16.10 port 32908 Feb 13 19:38:58.701999 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:58.704790 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:32908.service: Deactivated successfully. Feb 13 19:38:58.706779 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:38:58.708582 systemd-logind[1711]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:38:58.710052 systemd-logind[1711]: Removed session 16. Feb 13 19:39:03.790932 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:55018.service - OpenSSH per-connection server daemon (10.200.16.10:55018). Feb 13 19:39:04.231068 sshd[4891]: Accepted publickey for core from 10.200.16.10 port 55018 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:04.232749 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:04.237096 systemd-logind[1711]: New session 17 of user core. Feb 13 19:39:04.244888 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:39:04.608962 sshd[4893]: Connection closed by 10.200.16.10 port 55018 Feb 13 19:39:04.609481 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:04.613207 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:55018.service: Deactivated successfully. Feb 13 19:39:04.614841 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:39:04.615542 systemd-logind[1711]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:39:04.616526 systemd-logind[1711]: Removed session 17. Feb 13 19:39:04.693994 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:55030.service - OpenSSH per-connection server daemon (10.200.16.10:55030). Feb 13 19:39:05.122502 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 55030 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:05.123849 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:05.127941 systemd-logind[1711]: New session 18 of user core. Feb 13 19:39:05.138881 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:39:05.545828 sshd[4906]: Connection closed by 10.200.16.10 port 55030 Feb 13 19:39:05.546405 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:05.550532 systemd-logind[1711]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:39:05.551467 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:55030.service: Deactivated successfully. Feb 13 19:39:05.554059 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:39:05.555812 systemd-logind[1711]: Removed session 18. Feb 13 19:39:05.637147 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:55040.service - OpenSSH per-connection server daemon (10.200.16.10:55040). 
Feb 13 19:39:06.103443 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 55040 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:06.104926 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:06.110142 systemd-logind[1711]: New session 19 of user core. Feb 13 19:39:06.116891 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:39:07.881752 sshd[4918]: Connection closed by 10.200.16.10 port 55040 Feb 13 19:39:07.881800 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:07.886138 systemd-logind[1711]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:39:07.886327 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:55040.service: Deactivated successfully. Feb 13 19:39:07.889537 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:39:07.889923 systemd[1]: session-19.scope: Consumed 427ms CPU time, 66.4M memory peak. Feb 13 19:39:07.893542 systemd-logind[1711]: Removed session 19. Feb 13 19:39:07.980077 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:55056.service - OpenSSH per-connection server daemon (10.200.16.10:55056). Feb 13 19:39:08.445568 sshd[4937]: Accepted publickey for core from 10.200.16.10 port 55056 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:08.447036 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:08.452146 systemd-logind[1711]: New session 20 of user core. Feb 13 19:39:08.456887 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:39:08.971466 sshd[4939]: Connection closed by 10.200.16.10 port 55056 Feb 13 19:39:08.972069 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:08.975538 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:55056.service: Deactivated successfully. Feb 13 19:39:08.977340 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:39:08.978134 systemd-logind[1711]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:39:08.979152 systemd-logind[1711]: Removed session 20. Feb 13 19:39:09.063875 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:36136.service - OpenSSH per-connection server daemon (10.200.16.10:36136). Feb 13 19:39:09.551753 sshd[4949]: Accepted publickey for core from 10.200.16.10 port 36136 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:09.553070 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:09.558317 systemd-logind[1711]: New session 21 of user core. Feb 13 19:39:09.561956 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:39:09.972234 sshd[4951]: Connection closed by 10.200.16.10 port 36136 Feb 13 19:39:09.972806 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:09.976598 systemd-logind[1711]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:39:09.976808 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:36136.service: Deactivated successfully. Feb 13 19:39:09.978431 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:39:09.980891 systemd-logind[1711]: Removed session 21. Feb 13 19:39:15.061318 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:36140.service - OpenSSH per-connection server daemon (10.200.16.10:36140). 
Feb 13 19:39:15.527117 sshd[4968]: Accepted publickey for core from 10.200.16.10 port 36140 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:15.528437 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:15.532635 systemd-logind[1711]: New session 22 of user core. Feb 13 19:39:15.537853 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:39:15.937869 sshd[4970]: Connection closed by 10.200.16.10 port 36140 Feb 13 19:39:15.938682 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:15.944111 systemd-logind[1711]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:39:15.944429 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:36140.service: Deactivated successfully. Feb 13 19:39:15.946730 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:39:15.947717 systemd-logind[1711]: Removed session 22. Feb 13 19:39:21.030379 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:45828.service - OpenSSH per-connection server daemon (10.200.16.10:45828). Feb 13 19:39:21.523956 sshd[4981]: Accepted publickey for core from 10.200.16.10 port 45828 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:21.525372 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:21.530691 systemd-logind[1711]: New session 23 of user core. Feb 13 19:39:21.535961 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:39:21.948926 sshd[4983]: Connection closed by 10.200.16.10 port 45828 Feb 13 19:39:21.949500 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:21.953194 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:45828.service: Deactivated successfully. Feb 13 19:39:21.954956 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:39:21.955763 systemd-logind[1711]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:39:21.956931 systemd-logind[1711]: Removed session 23. Feb 13 19:39:27.040946 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:45832.service - OpenSSH per-connection server daemon (10.200.16.10:45832). Feb 13 19:39:27.523459 sshd[4996]: Accepted publickey for core from 10.200.16.10 port 45832 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:27.524958 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:27.528994 systemd-logind[1711]: New session 24 of user core. Feb 13 19:39:27.537905 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:39:27.946954 sshd[4998]: Connection closed by 10.200.16.10 port 45832 Feb 13 19:39:27.947533 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:27.951000 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:45832.service: Deactivated successfully. Feb 13 19:39:27.953561 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:39:27.954419 systemd-logind[1711]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:39:27.955896 systemd-logind[1711]: Removed session 24. Feb 13 19:39:28.037349 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:45838.service - OpenSSH per-connection server daemon (10.200.16.10:45838). 
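From 19:38:32 onward the journal settles into a steady cadence of short SSH sessions from 10.200.16.10 (sessions 10 through 24, most lasting well under a second): systemd starts a per-connection sshd@N-....service, pam_unix opens a session for user core, systemd-logind creates session-N.scope, and the whole chain is torn down on disconnect; the pattern looks like an external harness or health check polling the node. A small Go sketch that pairs the pam_unix open/close lines by sshd-session PID to measure session lengths (timestamp and message formats are inferred from this excerpt, and one journal entry per input line is assumed):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Match "Feb 13 19:38:32.509667 sshd-session[4789]: pam_unix(sshd:session):
// session opened ..." and the matching "session closed" line for the same PID.
var lineRe = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{} // PID -> open time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := lineRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		if m[3] == "opened" {
			opened[m[2]] = ts
		} else if start, ok := opened[m[2]]; ok {
			fmt.Printf("sshd-session[%s] lasted %s\n", m[2], ts.Sub(start))
			delete(opened, m[2])
		}
	}
}
```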
Feb 13 19:39:28.483356 sshd[5009]: Accepted publickey for core from 10.200.16.10 port 45838 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo Feb 13 19:39:28.484821 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:28.489172 systemd-logind[1711]: New session 25 of user core. Feb 13 19:39:28.492896 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:39:30.699127 containerd[1773]: time="2025-02-13T19:39:30.699084788Z" level=info msg="StopContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" with timeout 30 (s)" Feb 13 19:39:30.700421 containerd[1773]: time="2025-02-13T19:39:30.700186989Z" level=info msg="Stop container \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" with signal terminated" Feb 13 19:39:30.713115 systemd[1]: cri-containerd-22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076.scope: Deactivated successfully. Feb 13 19:39:30.719645 containerd[1773]: time="2025-02-13T19:39:30.719595892Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:39:30.728228 containerd[1773]: time="2025-02-13T19:39:30.728097342Z" level=info msg="StopContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" with timeout 2 (s)" Feb 13 19:39:30.728584 containerd[1773]: time="2025-02-13T19:39:30.728306902Z" level=info msg="Stop container \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" with signal terminated" Feb 13 19:39:30.737376 systemd-networkd[1436]: lxc_health: Link DOWN Feb 13 19:39:30.737384 systemd-networkd[1436]: lxc_health: Lost carrier Feb 13 19:39:30.743369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076-rootfs.mount: Deactivated successfully. Feb 13 19:39:30.754196 systemd[1]: cri-containerd-029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0.scope: Deactivated successfully. Feb 13 19:39:30.754934 systemd[1]: cri-containerd-029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0.scope: Consumed 7.026s CPU time, 126.4M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 19:39:30.777219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0-rootfs.mount: Deactivated successfully. 
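The error at 19:39:30.719 is the pivotal event of this teardown: stopping what is evidently the Cilium agent removed /etc/cni/net.d/05-cilium.conf, and containerd's watch on that directory found no remaining network config, leaving the CNI plugin "not initialized" until a replacement config is written. The lxc_health carrier loss a few lines later is the agent's health-check interface going down with it. A minimal sketch of that watch-and-reload pattern using the fsnotify library (an illustration of the mechanism, not containerd's actual implementation):

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// Watch a CNI config directory and react whenever a *.conf/*.conflist file
// changes, mirroring the "fs change event(REMOVE ...)" entry above.
func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ok, _ := filepath.Match("*.conf*", filepath.Base(ev.Name)); !ok {
			continue
		}
		log.Printf("cni config event %s on %s; reloading", ev.Op, ev.Name)
		// A real implementation would re-scan the directory here and fall
		// back to "cni plugin not initialized" when nothing is left.
	}
}
```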
Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810638679Z" level=info msg="shim disconnected" id=029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0 namespace=k8s.io Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810844760Z" level=warning msg="cleaning up after shim disconnected" id=029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0 namespace=k8s.io Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810854440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810763759Z" level=info msg="shim disconnected" id=22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076 namespace=k8s.io Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810908080Z" level=warning msg="cleaning up after shim disconnected" id=22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076 namespace=k8s.io Feb 13 19:39:30.810851 containerd[1773]: time="2025-02-13T19:39:30.810916880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:30.823110 containerd[1773]: time="2025-02-13T19:39:30.823052534Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:39:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:39:30.830997 containerd[1773]: time="2025-02-13T19:39:30.830426863Z" level=info msg="StopContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" returns successfully" Feb 13 19:39:30.831488 containerd[1773]: time="2025-02-13T19:39:30.831208624Z" level=info msg="StopPodSandbox for \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\"" Feb 13 19:39:30.831488 containerd[1773]: time="2025-02-13T19:39:30.831260504Z" level=info msg="Container to stop \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.833342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54-shm.mount: Deactivated successfully. 
Feb 13 19:39:30.834413 containerd[1773]: time="2025-02-13T19:39:30.833999867Z" level=info msg="StopContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" returns successfully" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834622828Z" level=info msg="StopPodSandbox for \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\"" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834714748Z" level=info msg="Container to stop \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834730628Z" level=info msg="Container to stop \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834753708Z" level=info msg="Container to stop \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834767108Z" level=info msg="Container to stop \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.836753 containerd[1773]: time="2025-02-13T19:39:30.834776108Z" level=info msg="Container to stop \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:30.839175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01-shm.mount: Deactivated successfully. Feb 13 19:39:30.844521 systemd[1]: cri-containerd-81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54.scope: Deactivated successfully. Feb 13 19:39:30.848182 systemd[1]: cri-containerd-815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01.scope: Deactivated successfully. 
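The stop sequence above is the standard CRI shutdown handshake: StopContainer carries a grace period (30 s for container 22452d..., 2 s for 029de..., which the volume names unmounted further down suggest are the Cilium operator and agent respectively), the runtime delivers SIGTERM ("with signal terminated"), and SIGKILL would follow only if the process outlived its timeout. Here both exit promptly, their cri-containerd-<id>.scope units deactivate, and StopPodSandbox then walks every container recorded in each sandbox; the info-level "must be in running or unknown state, current state CONTAINER_EXITED" lines are just that walk skipping containers that have already exited. A hedged Go sketch of the terminate-then-kill semantics (illustrative only, not containerd's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the StopContainer semantics seen above: SIGTERM
// first, SIGKILL only if the process is still alive after the grace period.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		cmd.Process.Kill() // grace period expired; force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// sleep exits on SIGTERM, so this reports "signal: terminated".
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}
```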
Feb 13 19:39:30.886414 containerd[1773]: time="2025-02-13T19:39:30.886168248Z" level=info msg="shim disconnected" id=81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54 namespace=k8s.io Feb 13 19:39:30.886414 containerd[1773]: time="2025-02-13T19:39:30.886403129Z" level=warning msg="cleaning up after shim disconnected" id=81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54 namespace=k8s.io Feb 13 19:39:30.886414 containerd[1773]: time="2025-02-13T19:39:30.886414089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:30.886904 containerd[1773]: time="2025-02-13T19:39:30.886217008Z" level=info msg="shim disconnected" id=815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01 namespace=k8s.io Feb 13 19:39:30.886904 containerd[1773]: time="2025-02-13T19:39:30.886489769Z" level=warning msg="cleaning up after shim disconnected" id=815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01 namespace=k8s.io Feb 13 19:39:30.886904 containerd[1773]: time="2025-02-13T19:39:30.886496809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:30.900671 containerd[1773]: time="2025-02-13T19:39:30.900595825Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:39:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:39:30.901757 containerd[1773]: time="2025-02-13T19:39:30.901579586Z" level=info msg="TearDown network for sandbox \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\" successfully" Feb 13 19:39:30.901757 containerd[1773]: time="2025-02-13T19:39:30.901606786Z" level=info msg="StopPodSandbox for \"81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54\" returns successfully" Feb 13 19:39:30.901757 containerd[1773]: time="2025-02-13T19:39:30.901658866Z" level=info msg="TearDown network for sandbox \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" successfully" Feb 13 19:39:30.901757 containerd[1773]: time="2025-02-13T19:39:30.901676427Z" level=info msg="StopPodSandbox for \"815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01\" returns successfully" Feb 13 19:39:30.966490 kubelet[3370]: I0213 19:39:30.966384 3370 scope.go:117] "RemoveContainer" containerID="22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076" Feb 13 19:39:30.971394 containerd[1773]: time="2025-02-13T19:39:30.971079268Z" level=info msg="RemoveContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\"" Feb 13 19:39:30.993083 containerd[1773]: time="2025-02-13T19:39:30.993036934Z" level=info msg="RemoveContainer for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" returns successfully" Feb 13 19:39:30.993467 kubelet[3370]: I0213 19:39:30.993419 3370 scope.go:117] "RemoveContainer" containerID="22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076" Feb 13 19:39:30.994044 containerd[1773]: time="2025-02-13T19:39:30.993683655Z" level=error msg="ContainerStatus for \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\": not found" Feb 13 19:39:30.994124 kubelet[3370]: E0213 19:39:30.993873 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\": not found" containerID="22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076" Feb 13 19:39:30.994124 kubelet[3370]: I0213 19:39:30.993899 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076"} err="failed to get container status \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\": rpc error: code = NotFound desc = an error occurred when try to find container \"22452d8e8b2bb2bdab76e65096508da68f2506d192e0b20448903223f9bd6076\": not found" Feb 13 19:39:30.994124 kubelet[3370]: I0213 19:39:30.993978 3370 scope.go:117] "RemoveContainer" containerID="029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0" Feb 13 19:39:30.995428 containerd[1773]: time="2025-02-13T19:39:30.995400897Z" level=info msg="RemoveContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\"" Feb 13 19:39:31.007566 containerd[1773]: time="2025-02-13T19:39:31.007522231Z" level=info msg="RemoveContainer for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" returns successfully" Feb 13 19:39:31.007963 kubelet[3370]: I0213 19:39:31.007856 3370 scope.go:117] "RemoveContainer" containerID="b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8" Feb 13 19:39:31.009141 containerd[1773]: time="2025-02-13T19:39:31.009106593Z" level=info msg="RemoveContainer for \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\"" Feb 13 19:39:31.024073 containerd[1773]: time="2025-02-13T19:39:31.024036291Z" level=info msg="RemoveContainer for \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\" returns successfully" Feb 13 19:39:31.024354 kubelet[3370]: I0213 19:39:31.024328 3370 scope.go:117] "RemoveContainer" containerID="c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50" Feb 13 19:39:31.025483 containerd[1773]: time="2025-02-13T19:39:31.025444092Z" level=info msg="RemoveContainer for \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\"" Feb 13 19:39:31.035572 containerd[1773]: time="2025-02-13T19:39:31.035526544Z" level=info msg="RemoveContainer for \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\" returns successfully" Feb 13 19:39:31.035914 kubelet[3370]: I0213 19:39:31.035791 3370 scope.go:117] "RemoveContainer" containerID="a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2" Feb 13 19:39:31.036978 containerd[1773]: time="2025-02-13T19:39:31.036950906Z" level=info msg="RemoveContainer for \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\"" Feb 13 19:39:31.046402 containerd[1773]: time="2025-02-13T19:39:31.046357277Z" level=info msg="RemoveContainer for \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\" returns successfully" Feb 13 19:39:31.046628 kubelet[3370]: I0213 19:39:31.046602 3370 scope.go:117] "RemoveContainer" containerID="045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0" Feb 13 19:39:31.048168 containerd[1773]: time="2025-02-13T19:39:31.047883159Z" level=info msg="RemoveContainer for \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\"" Feb 13 19:39:31.056682 containerd[1773]: time="2025-02-13T19:39:31.056639769Z" level=info msg="RemoveContainer for \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\" returns successfully" Feb 13 19:39:31.056912 kubelet[3370]: I0213 19:39:31.056887 
3370 scope.go:117] "RemoveContainer" containerID="029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0" Feb 13 19:39:31.057330 containerd[1773]: time="2025-02-13T19:39:31.057255450Z" level=error msg="ContainerStatus for \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\": not found" Feb 13 19:39:31.057429 kubelet[3370]: E0213 19:39:31.057359 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\": not found" containerID="029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0" Feb 13 19:39:31.057429 kubelet[3370]: I0213 19:39:31.057382 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0"} err="failed to get container status \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"029de9b19a7ca1befa657bd49f497bfb05d503e081cd7bcfe5656bf295b834d0\": not found" Feb 13 19:39:31.057429 kubelet[3370]: I0213 19:39:31.057402 3370 scope.go:117] "RemoveContainer" containerID="b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8" Feb 13 19:39:31.058129 containerd[1773]: time="2025-02-13T19:39:31.057680130Z" level=error msg="ContainerStatus for \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\": not found" Feb 13 19:39:31.058129 containerd[1773]: time="2025-02-13T19:39:31.058082411Z" level=error msg="ContainerStatus for \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\": not found" Feb 13 19:39:31.058229 kubelet[3370]: E0213 19:39:31.057883 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\": not found" containerID="b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8" Feb 13 19:39:31.058229 kubelet[3370]: I0213 19:39:31.057922 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8"} err="failed to get container status \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b62f632baeeda55bf21de3e3f1acd36d7ffb7f7351914bcfe5cf2186a7a192c8\": not found" Feb 13 19:39:31.058229 kubelet[3370]: I0213 19:39:31.057937 3370 scope.go:117] "RemoveContainer" containerID="c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50" Feb 13 19:39:31.058533 kubelet[3370]: E0213 19:39:31.058387 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\": 
not found" containerID="c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50" Feb 13 19:39:31.058533 kubelet[3370]: I0213 19:39:31.058448 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50"} err="failed to get container status \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0a56cea8a7d0a900092f123a2fe0b914b9beee91637c7c8a3a837cd8d172a50\": not found" Feb 13 19:39:31.058533 kubelet[3370]: I0213 19:39:31.058465 3370 scope.go:117] "RemoveContainer" containerID="a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2" Feb 13 19:39:31.059002 kubelet[3370]: E0213 19:39:31.058848 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\": not found" containerID="a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2" Feb 13 19:39:31.059002 kubelet[3370]: I0213 19:39:31.058874 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2"} err="failed to get container status \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\": not found" Feb 13 19:39:31.059002 kubelet[3370]: I0213 19:39:31.058892 3370 scope.go:117] "RemoveContainer" containerID="045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0" Feb 13 19:39:31.059068 containerd[1773]: time="2025-02-13T19:39:31.058630491Z" level=error msg="ContainerStatus for \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ba67ecc457126f2fd392c18f5ce4a12c0f02e27fabd74b94f24de28cb4d1e2\": not found" Feb 13 19:39:31.059299 containerd[1773]: time="2025-02-13T19:39:31.059211052Z" level=error msg="ContainerStatus for \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\": not found" Feb 13 19:39:31.059369 kubelet[3370]: E0213 19:39:31.059328 3370 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\": not found" containerID="045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0" Feb 13 19:39:31.059369 kubelet[3370]: I0213 19:39:31.059355 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0"} err="failed to get container status \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\": rpc error: code = NotFound desc = an error occurred when try to find container \"045f633609687090306a75764b8986926620f91801d5183ba065ffb2f25c8ae0\": not found" Feb 13 19:39:31.072730 kubelet[3370]: I0213 19:39:31.072709 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-hubble-tls\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072823 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cni-path\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072843 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-net\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072858 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-cgroup\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072877 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq9f8\" (UniqueName: \"kubernetes.io/projected/e92e1175-5078-4fdc-aaf1-fed1777808fd-kube-api-access-nq9f8\") pod \"e92e1175-5078-4fdc-aaf1-fed1777808fd\" (UID: \"e92e1175-5078-4fdc-aaf1-fed1777808fd\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072895 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-run\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073378 kubelet[3370]: I0213 19:39:31.072911 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e92e1175-5078-4fdc-aaf1-fed1777808fd-cilium-config-path\") pod \"e92e1175-5078-4fdc-aaf1-fed1777808fd\" (UID: \"e92e1175-5078-4fdc-aaf1-fed1777808fd\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.072926 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-lib-modules\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.072945 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdda414c-4112-42e0-baf1-705943264f7c-cilium-config-path\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.072959 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-kernel\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.072974 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-xtables-lock\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.072989 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-bpf-maps\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073550 kubelet[3370]: I0213 19:39:31.073005 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-etc-cni-netd\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073669 kubelet[3370]: I0213 19:39:31.073022 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdda414c-4112-42e0-baf1-705943264f7c-clustermesh-secrets\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073669 kubelet[3370]: I0213 19:39:31.073038 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klhz4\" (UniqueName: \"kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-kube-api-access-klhz4\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073669 kubelet[3370]: I0213 19:39:31.073055 3370 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-hostproc\") pod \"fdda414c-4112-42e0-baf1-705943264f7c\" (UID: \"fdda414c-4112-42e0-baf1-705943264f7c\") " Feb 13 19:39:31.073669 kubelet[3370]: I0213 19:39:31.073112 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-hostproc" (OuterVolumeSpecName: "hostproc") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.073669 kubelet[3370]: I0213 19:39:31.073146 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cni-path" (OuterVolumeSpecName: "cni-path") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.073819 kubelet[3370]: I0213 19:39:31.073159 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.073819 kubelet[3370]: I0213 19:39:31.073172 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.074929 kubelet[3370]: I0213 19:39:31.074866 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.075016 kubelet[3370]: I0213 19:39:31.074942 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.075904 kubelet[3370]: I0213 19:39:31.075875 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.076219 kubelet[3370]: I0213 19:39:31.076121 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.076219 kubelet[3370]: I0213 19:39:31.076165 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.076219 kubelet[3370]: I0213 19:39:31.076178 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:31.077814 kubelet[3370]: I0213 19:39:31.077674 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdda414c-4112-42e0-baf1-705943264f7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:39:31.077880 kubelet[3370]: I0213 19:39:31.077823 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92e1175-5078-4fdc-aaf1-fed1777808fd-kube-api-access-nq9f8" (OuterVolumeSpecName: "kube-api-access-nq9f8") pod "e92e1175-5078-4fdc-aaf1-fed1777808fd" (UID: "e92e1175-5078-4fdc-aaf1-fed1777808fd"). InnerVolumeSpecName "kube-api-access-nq9f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:31.078416 kubelet[3370]: I0213 19:39:31.078314 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:31.079188 kubelet[3370]: I0213 19:39:31.079131 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e92e1175-5078-4fdc-aaf1-fed1777808fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e92e1175-5078-4fdc-aaf1-fed1777808fd" (UID: "e92e1175-5078-4fdc-aaf1-fed1777808fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:39:31.080384 kubelet[3370]: I0213 19:39:31.080354 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-kube-api-access-klhz4" (OuterVolumeSpecName: "kube-api-access-klhz4") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "kube-api-access-klhz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:31.080770 kubelet[3370]: I0213 19:39:31.080743 3370 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda414c-4112-42e0-baf1-705943264f7c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fdda414c-4112-42e0-baf1-705943264f7c" (UID: "fdda414c-4112-42e0-baf1-705943264f7c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:39:31.174032 kubelet[3370]: I0213 19:39:31.173993 3370 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdda414c-4112-42e0-baf1-705943264f7c-hubble-tls\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174032 kubelet[3370]: I0213 19:39:31.174026 3370 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cni-path\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174032 kubelet[3370]: I0213 19:39:31.174038 3370 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-net\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174032 kubelet[3370]: I0213 19:39:31.174047 3370 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-cgroup\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174057 3370 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nq9f8\" (UniqueName: \"kubernetes.io/projected/e92e1175-5078-4fdc-aaf1-fed1777808fd-kube-api-access-nq9f8\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174065 3370 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-cilium-run\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174075 3370 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e92e1175-5078-4fdc-aaf1-fed1777808fd-cilium-config-path\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174082 3370 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-lib-modules\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174092 3370 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdda414c-4112-42e0-baf1-705943264f7c-cilium-config-path\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174101 3370 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-host-proc-sys-kernel\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174110 3370 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-xtables-lock\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174260 kubelet[3370]: I0213 19:39:31.174117 3370 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdda414c-4112-42e0-baf1-705943264f7c-bpf-maps\") on node \"ci-4230.0.1-a-2ba2208742\" DevicePath \"\"" Feb 13 19:39:31.174419 kubelet[3370]: I0213 19:39:31.174125 3370 
Feb 13 19:39:31.271504 systemd[1]: Removed slice kubepods-besteffort-pode92e1175_5078_4fdc_aaf1_fed1777808fd.slice - libcontainer container kubepods-besteffort-pode92e1175_5078_4fdc_aaf1_fed1777808fd.slice.
Feb 13 19:39:31.278030 systemd[1]: Removed slice kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice - libcontainer container kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice.
Feb 13 19:39:31.278386 systemd[1]: kubepods-burstable-podfdda414c_4112_42e0_baf1_705943264f7c.slice: Consumed 7.101s CPU time, 126.8M memory peak, 144K read from disk, 12.9M written to disk.
Feb 13 19:39:31.696658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81b6a5263cd583cf9ac0d49c2c9ed070eaaa81ed5147e0bee6e9fa021a98ec54-rootfs.mount: Deactivated successfully.
Feb 13 19:39:31.697028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-815397179f0981ae715ca576365de09d75eaba462be8500d5bd26ea2b969ce01-rootfs.mount: Deactivated successfully.
Feb 13 19:39:31.697084 systemd[1]: var-lib-kubelet-pods-e92e1175\x2d5078\x2d4fdc\x2daaf1\x2dfed1777808fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnq9f8.mount: Deactivated successfully.
Feb 13 19:39:31.697141 systemd[1]: var-lib-kubelet-pods-fdda414c\x2d4112\x2d42e0\x2dbaf1\x2d705943264f7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklhz4.mount: Deactivated successfully.
Feb 13 19:39:31.697190 systemd[1]: var-lib-kubelet-pods-fdda414c\x2d4112\x2d42e0\x2dbaf1\x2d705943264f7c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:39:31.697241 systemd[1]: var-lib-kubelet-pods-fdda414c\x2d4112\x2d42e0\x2dbaf1\x2d705943264f7c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:39:32.448952 kubelet[3370]: E0213 19:39:32.448907 3370 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:39:32.715460 sshd[5011]: Connection closed by 10.200.16.10 port 45838
Feb 13 19:39:32.715347 sshd-session[5009]: pam_unix(sshd:session): session closed for user core
Feb 13 19:39:32.719494 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:39:32.719728 systemd[1]: session-25.scope: Consumed 1.324s CPU time, 23.5M memory peak.
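The var-lib-kubelet-... names above are systemd .mount units whose names are escaped filesystem paths: "/" becomes "-", and bytes that would be ambiguous in a unit name (a literal "-", or "~") are hex-escaped as \x2d and \x7e. A small illustrative decoder follows; unescapeUnitPath is my own helper mirroring the documented escaping rules, and systemd-escape(1) remains the authoritative tool. Pass the unit name without its ".mount" suffix:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeUnitPath reverses systemd's path escaping as used by the .mount
    // units above: '-' separates path components and "\xHH" encodes a literal
    // byte. Illustrative only.
    func unescapeUnitPath(name string) string {
    	var b strings.Builder
    	b.WriteByte('/')
    	for i := 0; i < len(name); i++ {
    		switch {
    		case name[i] == '-':
    			b.WriteByte('/')
    		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
    			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(v))
    				i += 3
    				continue
    			}
    			b.WriteByte(name[i])
    		default:
    			b.WriteByte(name[i])
    		}
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(unescapeUnitPath(`var-lib-kubelet-pods-e92e1175\x2d5078\x2d4fdc\x2daaf1\x2dfed1777808fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnq9f8`))
    	// /var/lib/kubelet/pods/e92e1175-5078-4fdc-aaf1-fed1777808fd/volumes/kubernetes.io~projected/kube-api-access-nq9f8
    }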
Feb 13 19:39:32.720913 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:45838.service: Deactivated successfully.
Feb 13 19:39:32.723273 systemd-logind[1711]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:39:32.724493 systemd-logind[1711]: Removed session 25.
Feb 13 19:39:32.795995 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:54250.service - OpenSSH per-connection server daemon (10.200.16.10:54250).
Feb 13 19:39:33.142944 kubelet[3370]: I0213 19:39:33.142234 3370 setters.go:600] "Node became not ready" node="ci-4230.0.1-a-2ba2208742" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:39:33Z","lastTransitionTime":"2025-02-13T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:39:33.225755 sshd[5172]: Accepted publickey for core from 10.200.16.10 port 54250 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo
Feb 13 19:39:33.227173 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:39:33.232283 systemd-logind[1711]: New session 26 of user core.
Feb 13 19:39:33.241875 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:39:33.296336 kubelet[3370]: I0213 19:39:33.296288 3370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92e1175-5078-4fdc-aaf1-fed1777808fd" path="/var/lib/kubelet/pods/e92e1175-5078-4fdc-aaf1-fed1777808fd/volumes"
Feb 13 19:39:33.296683 kubelet[3370]: I0213 19:39:33.296660 3370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdda414c-4112-42e0-baf1-705943264f7c" path="/var/lib/kubelet/pods/fdda414c-4112-42e0-baf1-705943264f7c/volumes"
Feb 13 19:39:35.518500 kubelet[3370]: E0213 19:39:35.518425 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="apply-sysctl-overwrites"
Feb 13 19:39:35.518500 kubelet[3370]: E0213 19:39:35.518459 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e92e1175-5078-4fdc-aaf1-fed1777808fd" containerName="cilium-operator"
Feb 13 19:39:35.518500 kubelet[3370]: E0213 19:39:35.518466 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="cilium-agent"
Feb 13 19:39:35.518500 kubelet[3370]: E0213 19:39:35.518472 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="mount-cgroup"
Feb 13 19:39:35.518500 kubelet[3370]: E0213 19:39:35.518478 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="mount-bpf-fs"
Feb 13 19:39:35.520733 kubelet[3370]: E0213 19:39:35.519018 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="clean-cilium-state"
Feb 13 19:39:35.520733 kubelet[3370]: I0213 19:39:35.519065 3370 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92e1175-5078-4fdc-aaf1-fed1777808fd" containerName="cilium-operator"
Feb 13 19:39:35.520733 kubelet[3370]: I0213 19:39:35.519072 3370 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdda414c-4112-42e0-baf1-705943264f7c" containerName="cilium-agent"
Feb 13 19:39:35.529809 systemd[1]: Created slice kubepods-burstable-podf722ee7e_6764_4536_8f44_418c95d9bdd0.slice - libcontainer container kubepods-burstable-podf722ee7e_6764_4536_8f44_418c95d9bdd0.slice.
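The setters.go:600 entry above embeds the node's Ready condition as JSON. A sketch decoding it with only the Go standard library; the struct mirrors just the fields present in this log line, not the full Kubernetes NodeCondition type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // nodeCondition mirrors only the fields visible in the logged JSON.
    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// The condition value copied from the setters.go:600 line above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:39:33Z","lastTransitionTime":"2025-02-13T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
    	var c nodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
    }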
Feb 13 19:39:35.566940 sshd[5174]: Connection closed by 10.200.16.10 port 54250
Feb 13 19:39:35.567923 sshd-session[5172]: pam_unix(sshd:session): session closed for user core
Feb 13 19:39:35.573048 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:54250.service: Deactivated successfully.
Feb 13 19:39:35.578686 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:39:35.579435 systemd[1]: session-26.scope: Consumed 1.948s CPU time, 25.6M memory peak.
Feb 13 19:39:35.581574 systemd-logind[1711]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:39:35.583398 systemd-logind[1711]: Removed session 26.
Feb 13 19:39:35.601586 kubelet[3370]: I0213 19:39:35.601550 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-cilium-run\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601781 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f722ee7e-6764-4536-8f44-418c95d9bdd0-cilium-config-path\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601810 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-cni-path\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601841 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-lib-modules\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601859 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-xtables-lock\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601874 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f722ee7e-6764-4536-8f44-418c95d9bdd0-cilium-ipsec-secrets\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.601907 kubelet[3370]: I0213 19:39:35.601890 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-bpf-maps\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602198 kubelet[3370]: I0213 19:39:35.602100 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-hostproc\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602198 kubelet[3370]: I0213 19:39:35.602124 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f722ee7e-6764-4536-8f44-418c95d9bdd0-hubble-tls\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602198 kubelet[3370]: I0213 19:39:35.602140 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5rt\" (UniqueName: \"kubernetes.io/projected/f722ee7e-6764-4536-8f44-418c95d9bdd0-kube-api-access-2k5rt\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602198 kubelet[3370]: I0213 19:39:35.602173 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-host-proc-sys-net\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602425 kubelet[3370]: I0213 19:39:35.602207 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-cilium-cgroup\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602425 kubelet[3370]: I0213 19:39:35.602237 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-etc-cni-netd\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602425 kubelet[3370]: I0213 19:39:35.602266 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f722ee7e-6764-4536-8f44-418c95d9bdd0-clustermesh-secrets\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.602425 kubelet[3370]: I0213 19:39:35.602301 3370 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f722ee7e-6764-4536-8f44-418c95d9bdd0-host-proc-sys-kernel\") pod \"cilium-9tjs2\" (UID: \"f722ee7e-6764-4536-8f44-418c95d9bdd0\") " pod="kube-system/cilium-9tjs2"
Feb 13 19:39:35.656990 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:54264.service - OpenSSH per-connection server daemon (10.200.16.10:54264).
Feb 13 19:39:35.836659 containerd[1773]: time="2025-02-13T19:39:35.836020838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tjs2,Uid:f722ee7e-6764-4536-8f44-418c95d9bdd0,Namespace:kube-system,Attempt:0,}"
Feb 13 19:39:35.877342 containerd[1773]: time="2025-02-13T19:39:35.877107486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:39:35.877342 containerd[1773]: time="2025-02-13T19:39:35.877175366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:39:35.877342 containerd[1773]: time="2025-02-13T19:39:35.877191686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:39:35.877342 containerd[1773]: time="2025-02-13T19:39:35.877273567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:39:35.893919 systemd[1]: Started cri-containerd-7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038.scope - libcontainer container 7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038.
Feb 13 19:39:35.915288 containerd[1773]: time="2025-02-13T19:39:35.915226911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tjs2,Uid:f722ee7e-6764-4536-8f44-418c95d9bdd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\""
Feb 13 19:39:35.919600 containerd[1773]: time="2025-02-13T19:39:35.919341736Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:39:35.956410 containerd[1773]: time="2025-02-13T19:39:35.956361580Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b\""
Feb 13 19:39:35.957851 containerd[1773]: time="2025-02-13T19:39:35.957479141Z" level=info msg="StartContainer for \"77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b\""
Feb 13 19:39:35.981891 systemd[1]: Started cri-containerd-77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b.scope - libcontainer container 77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b.
Feb 13 19:39:36.011317 containerd[1773]: time="2025-02-13T19:39:36.011271844Z" level=info msg="StartContainer for \"77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b\" returns successfully"
Feb 13 19:39:36.016990 systemd[1]: cri-containerd-77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b.scope: Deactivated successfully.
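The containerd entries above are logfmt-style lines (time=... level=... msg=...), with quotes inside msg backslash-escaped. The scope for container 77fe497b... deactivating right after a successful StartContainer is consistent with mount-cgroup being a run-to-completion init step of the Cilium pod rather than a crash. A minimal regexp sketch for pulling level and msg out of such lines (illustrative only; a real logfmt parser handles quoting more carefully):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches level=<word> and a double-quoted msg whose body may contain
    // backslash-escaped quotes, as in the containerd lines above.
    var re = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

    func main() {
    	line := `time="2025-02-13T19:39:35.877175366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1`
    	if m := re.FindStringSubmatch(line); m != nil {
    		fmt.Println("level:", m[1]) // level: info
    		fmt.Println("msg:", m[2])   // msg: loading plugin \"io.containerd.internal.v1.shutdown\"...
    	}
    }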
Feb 13 19:39:36.087936 containerd[1773]: time="2025-02-13T19:39:36.087764894Z" level=info msg="shim disconnected" id=77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b namespace=k8s.io
Feb 13 19:39:36.087936 containerd[1773]: time="2025-02-13T19:39:36.087849975Z" level=warning msg="cleaning up after shim disconnected" id=77fe497b38c52ce486f8dba9e4d128e19f23b489bebb71f372200935cdfdb03b namespace=k8s.io
Feb 13 19:39:36.087936 containerd[1773]: time="2025-02-13T19:39:36.087859415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:36.099677 containerd[1773]: time="2025-02-13T19:39:36.099618148Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:39:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:39:36.103237 sshd[5185]: Accepted publickey for core from 10.200.16.10 port 54264 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo
Feb 13 19:39:36.104182 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:39:36.109358 systemd-logind[1711]: New session 27 of user core.
Feb 13 19:39:36.111892 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:39:36.428600 sshd[5298]: Connection closed by 10.200.16.10 port 54264
Feb 13 19:39:36.428905 sshd-session[5185]: pam_unix(sshd:session): session closed for user core
Feb 13 19:39:36.431679 systemd-logind[1711]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:39:36.432030 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:54264.service: Deactivated successfully.
Feb 13 19:39:36.434751 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:39:36.436635 systemd-logind[1711]: Removed session 27.
Feb 13 19:39:36.513009 systemd[1]: Started sshd@25-10.200.20.38:22-10.200.16.10:54280.service - OpenSSH per-connection server daemon (10.200.16.10:54280).
Feb 13 19:39:36.984261 sshd[5305]: Accepted publickey for core from 10.200.16.10 port 54280 ssh2: RSA SHA256:29NR7rgxHKa+JecffAxU3Lc+gR1zN/QsHTnQIC71aPo
Feb 13 19:39:36.985649 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:39:36.989684 systemd-logind[1711]: New session 28 of user core.
Feb 13 19:39:36.995936 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:39:37.001355 containerd[1773]: time="2025-02-13T19:39:37.001056650Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:39:37.045439 containerd[1773]: time="2025-02-13T19:39:37.045364982Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089\""
Feb 13 19:39:37.047337 containerd[1773]: time="2025-02-13T19:39:37.046447824Z" level=info msg="StartContainer for \"ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089\""
Feb 13 19:39:37.078899 systemd[1]: Started cri-containerd-ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089.scope - libcontainer container ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089.
Feb 13 19:39:37.107467 containerd[1773]: time="2025-02-13T19:39:37.107419695Z" level=info msg="StartContainer for \"ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089\" returns successfully"
Feb 13 19:39:37.110123 systemd[1]: cri-containerd-ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089.scope: Deactivated successfully.
Feb 13 19:39:37.145903 containerd[1773]: time="2025-02-13T19:39:37.145773381Z" level=info msg="shim disconnected" id=ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089 namespace=k8s.io
Feb 13 19:39:37.145903 containerd[1773]: time="2025-02-13T19:39:37.145852901Z" level=warning msg="cleaning up after shim disconnected" id=ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089 namespace=k8s.io
Feb 13 19:39:37.145903 containerd[1773]: time="2025-02-13T19:39:37.145862421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:37.450648 kubelet[3370]: E0213 19:39:37.450584 3370 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:39:37.708156 systemd[1]: run-containerd-runc-k8s.io-ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089-runc.zN8RAu.mount: Deactivated successfully.
Feb 13 19:39:37.708247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef4d345ee7dede306eb8e09fd0390b6408955739a8225818ad0f6122e1e35089-rootfs.mount: Deactivated successfully.
Feb 13 19:39:38.002928 containerd[1773]: time="2025-02-13T19:39:38.002793750Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:39:38.043508 containerd[1773]: time="2025-02-13T19:39:38.043423756Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61\""
Feb 13 19:39:38.045036 containerd[1773]: time="2025-02-13T19:39:38.044995157Z" level=info msg="StartContainer for \"fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61\""
Feb 13 19:39:38.075954 systemd[1]: Started cri-containerd-fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61.scope - libcontainer container fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61.
Feb 13 19:39:38.107163 systemd[1]: cri-containerd-fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61.scope: Deactivated successfully.
Feb 13 19:39:38.110573 containerd[1773]: time="2025-02-13T19:39:38.110493752Z" level=info msg="StartContainer for \"fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61\" returns successfully"
Feb 13 19:39:38.139140 containerd[1773]: time="2025-02-13T19:39:38.138834184Z" level=info msg="shim disconnected" id=fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61 namespace=k8s.io
Feb 13 19:39:38.139140 containerd[1773]: time="2025-02-13T19:39:38.138888944Z" level=warning msg="cleaning up after shim disconnected" id=fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61 namespace=k8s.io
Feb 13 19:39:38.139140 containerd[1773]: time="2025-02-13T19:39:38.138897184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:38.708226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf2adf9545c10de6f82e87f73286162f38734430de22fe294c843e7fbc0eb61-rootfs.mount: Deactivated successfully.
Feb 13 19:39:39.006244 containerd[1773]: time="2025-02-13T19:39:39.005895764Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:39:39.049370 containerd[1773]: time="2025-02-13T19:39:39.049282693Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650\""
Feb 13 19:39:39.050662 containerd[1773]: time="2025-02-13T19:39:39.049829653Z" level=info msg="StartContainer for \"91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650\""
Feb 13 19:39:39.076877 systemd[1]: Started cri-containerd-91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650.scope - libcontainer container 91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650.
Feb 13 19:39:39.098210 systemd[1]: cri-containerd-91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650.scope: Deactivated successfully.
Feb 13 19:39:39.102606 containerd[1773]: time="2025-02-13T19:39:39.102403153Z" level=info msg="StartContainer for \"91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650\" returns successfully"
Feb 13 19:39:39.119745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650-rootfs.mount: Deactivated successfully.
Feb 13 19:39:39.131667 containerd[1773]: time="2025-02-13T19:39:39.131522866Z" level=info msg="shim disconnected" id=91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650 namespace=k8s.io
Feb 13 19:39:39.131667 containerd[1773]: time="2025-02-13T19:39:39.131659866Z" level=warning msg="cleaning up after shim disconnected" id=91e19f22d5b3c1da4741ca78dbe5192304909928421480214db6a4b480939650 namespace=k8s.io
Feb 13 19:39:39.131667 containerd[1773]: time="2025-02-13T19:39:39.131669466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:40.008140 containerd[1773]: time="2025-02-13T19:39:40.008091377Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:39:40.047730 containerd[1773]: time="2025-02-13T19:39:40.047519141Z" level=info msg="CreateContainer within sandbox \"7dafe84a794441a8b5fae7b1d00a2dffc55884c1677247b892acaba95d541038\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac\""
Feb 13 19:39:40.048246 containerd[1773]: time="2025-02-13T19:39:40.048049902Z" level=info msg="StartContainer for \"caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac\""
Feb 13 19:39:40.077921 systemd[1]: Started cri-containerd-caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac.scope - libcontainer container caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac.
Feb 13 19:39:40.110320 containerd[1773]: time="2025-02-13T19:39:40.110183772Z" level=info msg="StartContainer for \"caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac\" returns successfully"
Feb 13 19:39:40.664757 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:39:41.028813 kubelet[3370]: I0213 19:39:41.028432 3370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9tjs2" podStartSLOduration=6.02841601 podStartE2EDuration="6.02841601s" podCreationTimestamp="2025-02-13 19:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:39:41.02808521 +0000 UTC m=+333.836466291" watchObservedRunningTime="2025-02-13 19:39:41.02841601 +0000 UTC m=+333.836797011"
Feb 13 19:39:43.279157 systemd-networkd[1436]: lxc_health: Link UP
Feb 13 19:39:43.300687 systemd-networkd[1436]: lxc_health: Gained carrier
Feb 13 19:39:44.447883 systemd-networkd[1436]: lxc_health: Gained IPv6LL
Feb 13 19:39:45.811785 systemd[1]: run-containerd-runc-k8s.io-caf3f117900f83ea06d2355c228225b35568ae5e2a9d1998f9d26b6d4d43d3ac-runc.N9e8SP.mount: Deactivated successfully.
Feb 13 19:39:50.149345 sshd[5307]: Connection closed by 10.200.16.10 port 54280
Feb 13 19:39:50.149757 sshd-session[5305]: pam_unix(sshd:session): session closed for user core
Feb 13 19:39:50.153440 systemd[1]: sshd@25-10.200.20.38:22-10.200.16.10:54280.service: Deactivated successfully.
Feb 13 19:39:50.156494 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:39:50.157573 systemd-logind[1711]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:39:50.159401 systemd-logind[1711]: Removed session 28.
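The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration=6.02841601 is exactly watchObservedRunningTime (19:39:41.02841601) minus podCreationTimestamp (19:39:35), and the zero-valued firstStartedPulling/lastFinishedPulling timestamps fit a node where the images were already present. A quick arithmetic check with the Go standard library, both timestamps copied from the line:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the pod_startup_latency_tracker line above.
    	created, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-02-13 19:39:35 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse("2006-01-02 15:04:05.00000000 -0700 MST", "2025-02-13 19:39:41.02841601 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(observed.Sub(created)) // 6.02841601s, matching podStartSLOduration
    }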