Feb 13 15:16:12.286245 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:16:12.286267 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:16:12.286276 kernel: KASLR enabled
Feb 13 15:16:12.286281 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 15:16:12.286288 kernel: printk: bootconsole [pl11] enabled
Feb 13 15:16:12.286294 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:16:12.286301 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Feb 13 15:16:12.286307 kernel: random: crng init done
Feb 13 15:16:12.286313 kernel: secureboot: Secure boot disabled
Feb 13 15:16:12.286319 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:16:12.286325 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 15:16:12.286330 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286336 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286344 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 15:16:12.286351 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286357 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286364 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286371 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286377 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286383 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286390 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 15:16:12.286396 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:16:12.286402 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 15:16:12.286408 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 15:16:12.286414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 15:16:12.286420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 15:16:12.286427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 15:16:12.286433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 15:16:12.286440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 15:16:12.286447 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 15:16:12.286453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 15:16:12.286459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 15:16:12.286465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 15:16:12.286471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 15:16:12.286477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 15:16:12.286484 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Feb 13 15:16:12.286490 kernel: Zone ranges:
Feb 13 15:16:12.286496 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 15:16:12.286502 kernel: DMA32 empty
Feb 13 15:16:12.286508 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:16:12.286518 kernel: Movable zone start for each node
Feb 13 15:16:12.286524 kernel: Early memory node ranges
Feb 13 15:16:12.286531 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 15:16:12.286538 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 15:16:12.286544 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 15:16:12.286552 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 15:16:12.286559 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 15:16:12.286565 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 15:16:12.286572 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 15:16:12.286579 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 15:16:12.286585 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:16:12.286591 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 15:16:12.286598 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 15:16:12.286604 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:16:12.286611 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:16:12.293495 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:16:12.293512 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 15:16:12.293525 kernel: psci: SMC Calling Convention v1.4
Feb 13 15:16:12.293531 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 15:16:12.293538 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 15:16:12.293545 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:16:12.293551 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:16:12.293558 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:16:12.293565 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:16:12.293571 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:16:12.293578 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:16:12.293585 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:16:12.293591 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:16:12.293600 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:16:12.293607 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:16:12.293613 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 15:16:12.293632 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:16:12.293639 kernel: alternatives: applying boot alternatives
Feb 13 15:16:12.293647 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:16:12.293655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:16:12.293661 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:16:12.293668 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:16:12.293675 kernel: Fallback order for Node 0: 0
Feb 13 15:16:12.293681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 15:16:12.293689 kernel: Policy zone: Normal
Feb 13 15:16:12.293696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:16:12.293702 kernel: software IO TLB: area num 2.
Feb 13 15:16:12.293709 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Feb 13 15:16:12.293716 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 15:16:12.293722 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:16:12.293729 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:16:12.293736 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:16:12.293743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:16:12.293749 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:16:12.293756 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:16:12.293764 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:16:12.293771 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:16:12.293777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:16:12.293783 kernel: GICv3: 960 SPIs implemented
Feb 13 15:16:12.293790 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:16:12.293796 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:16:12.293803 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:16:12.293809 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 15:16:12.293816 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 15:16:12.293823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:16:12.293829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:16:12.293836 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:16:12.293844 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:16:12.293851 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:16:12.293857 kernel: Console: colour dummy device 80x25
Feb 13 15:16:12.293865 kernel: printk: console [tty1] enabled
Feb 13 15:16:12.293871 kernel: ACPI: Core revision 20230628
Feb 13 15:16:12.293878 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:16:12.293885 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:16:12.293892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:16:12.293899 kernel: landlock: Up and running.
Feb 13 15:16:12.293907 kernel: SELinux: Initializing.
Feb 13 15:16:12.293914 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:12.293921 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:12.293928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:12.293935 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:12.293942 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 15:16:12.293949 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 15:16:12.293962 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:16:12.293969 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:16:12.293976 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:16:12.293983 kernel: Remapping and enabling EFI services.
Feb 13 15:16:12.293990 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:16:12.293999 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:16:12.294006 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 15:16:12.294014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:16:12.294020 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:16:12.294028 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:16:12.294036 kernel: SMP: Total of 2 processors activated.
Feb 13 15:16:12.294043 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:16:12.294050 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 15:16:12.294058 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:16:12.294065 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:16:12.294072 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:16:12.294079 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:16:12.294086 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:16:12.294093 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:16:12.294101 kernel: alternatives: applying system-wide alternatives
Feb 13 15:16:12.294109 kernel: devtmpfs: initialized
Feb 13 15:16:12.294116 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:16:12.294123 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:16:12.294130 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:16:12.294137 kernel: SMBIOS 3.1.0 present.
Feb 13 15:16:12.294144 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 15:16:12.294152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:16:12.294159 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:16:12.294168 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:16:12.294175 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:16:12.294182 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:16:12.294189 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 15:16:12.294196 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:16:12.294203 kernel: cpuidle: using governor menu
Feb 13 15:16:12.294210 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:16:12.294217 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:16:12.294224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:16:12.294233 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:16:12.294240 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:16:12.294247 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:16:12.294254 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:16:12.294261 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:16:12.294269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:16:12.294276 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:16:12.294283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:16:12.294290 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:16:12.294299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:16:12.294306 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:16:12.294313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:16:12.294320 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:16:12.294327 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:16:12.294334 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:16:12.294341 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:16:12.294348 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:16:12.294355 kernel: ACPI: Interpreter enabled
Feb 13 15:16:12.294364 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:16:12.294371 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:16:12.294378 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:16:12.294385 kernel: printk: bootconsole [pl11] disabled
Feb 13 15:16:12.294392 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 15:16:12.294399 kernel: iommu: Default domain type: Translated
Feb 13 15:16:12.294406 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:16:12.294414 kernel: efivars: Registered efivars operations
Feb 13 15:16:12.294421 kernel: vgaarb: loaded
Feb 13 15:16:12.294429 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:16:12.294436 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:16:12.294443 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:16:12.294451 kernel: pnp: PnP ACPI init
Feb 13 15:16:12.294458 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 15:16:12.294465 kernel: NET: Registered PF_INET protocol family
Feb 13 15:16:12.294472 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:16:12.294479 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:16:12.294487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:16:12.294495 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:16:12.294503 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:16:12.294510 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:16:12.294517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:12.294524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:12.294534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:16:12.294541 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:16:12.294549 kernel: kvm [1]: HYP mode not available
Feb 13 15:16:12.294557 kernel: Initialise system trusted keyrings
Feb 13 15:16:12.294567 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:16:12.294575 kernel: Key type asymmetric registered
Feb 13 15:16:12.294583 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:16:12.294591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:16:12.294599 kernel: io scheduler mq-deadline registered
Feb 13 15:16:12.294608 kernel: io scheduler kyber registered
Feb 13 15:16:12.294638 kernel: io scheduler bfq registered
Feb 13 15:16:12.294647 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:16:12.294656 kernel: thunder_xcv, ver 1.0
Feb 13 15:16:12.294667 kernel: thunder_bgx, ver 1.0
Feb 13 15:16:12.294675 kernel: nicpf, ver 1.0
Feb 13 15:16:12.294684 kernel: nicvf, ver 1.0
Feb 13 15:16:12.294836 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:16:12.294907 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:11 UTC (1739459771)
Feb 13 15:16:12.294917 kernel: efifb: probing for efifb
Feb 13 15:16:12.294925 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:16:12.294932 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:16:12.294941 kernel: efifb: scrolling: redraw
Feb 13 15:16:12.294948 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:16:12.294955 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:16:12.294963 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:16:12.294970 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 15:16:12.294977 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:16:12.294984 kernel: No ACPI PMU IRQ for CPU0
Feb 13 15:16:12.294991 kernel: No ACPI PMU IRQ for CPU1
Feb 13 15:16:12.294998 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 15:16:12.295006 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:16:12.295013 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:16:12.295020 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:16:12.295027 kernel: Segment Routing with IPv6
Feb 13 15:16:12.295034 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:16:12.295041 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:16:12.295048 kernel: Key type dns_resolver registered
Feb 13 15:16:12.295055 kernel: registered taskstats version 1
Feb 13 15:16:12.295062 kernel: Loading compiled-in X.509 certificates
Feb 13 15:16:12.295071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:16:12.295078 kernel: Key type .fscrypt registered
Feb 13 15:16:12.295085 kernel: Key type fscrypt-provisioning registered
Feb 13 15:16:12.295092 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:16:12.295099 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:16:12.295106 kernel: ima: No architecture policies found
Feb 13 15:16:12.295113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:16:12.295120 kernel: clk: Disabling unused clocks
Feb 13 15:16:12.295127 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:16:12.295136 kernel: Run /init as init process
Feb 13 15:16:12.295143 kernel: with arguments:
Feb 13 15:16:12.295150 kernel: /init
Feb 13 15:16:12.295157 kernel: with environment:
Feb 13 15:16:12.295164 kernel: HOME=/
Feb 13 15:16:12.295171 kernel: TERM=linux
Feb 13 15:16:12.295178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:16:12.295185 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:16:12.295197 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:16:12.295205 systemd[1]: Detected virtualization microsoft.
Feb 13 15:16:12.295213 systemd[1]: Detected architecture arm64.
Feb 13 15:16:12.295220 systemd[1]: Running in initrd.
Feb 13 15:16:12.295227 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:16:12.295235 systemd[1]: Hostname set to <localhost>.
Feb 13 15:16:12.295243 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:16:12.295250 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:16:12.295259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:12.295267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:12.295275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:16:12.295283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:12.295291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:16:12.295299 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:16:12.295308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:16:12.295317 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:16:12.295325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:12.295333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:12.295341 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:16:12.295348 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:16:12.295356 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:16:12.295363 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:16:12.295371 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:12.295380 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:12.295388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:16:12.295395 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:16:12.295403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:12.295411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:12.295419 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:12.295426 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:16:12.295434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:16:12.295441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:16:12.295451 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:16:12.295458 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:16:12.295466 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:16:12.295474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:16:12.295498 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 15:16:12.295519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:12.295527 systemd-journald[218]: Journal started
Feb 13 15:16:12.295545 systemd-journald[218]: Runtime Journal (/run/log/journal/da69f10926ca4251b51336850d6918aa) is 8M, max 78.5M, 70.5M free.
Feb 13 15:16:12.295876 systemd-modules-load[220]: Inserted module 'overlay'
Feb 13 15:16:12.311717 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:16:12.317097 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:12.336896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:12.359974 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:16:12.359997 kernel: Bridge firewalling registered
Feb 13 15:16:12.353484 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:16:12.359223 systemd-modules-load[220]: Inserted module 'br_netfilter'
Feb 13 15:16:12.364331 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:12.374096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:12.396839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:12.410523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:16:12.418779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:16:12.448942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:16:12.456742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:12.471919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:12.484196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:12.497390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:12.520824 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:16:12.528790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:12.549557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:16:12.563930 dracut-cmdline[252]: dracut-dracut-053
Feb 13 15:16:12.569790 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:16:12.601016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:12.625705 systemd-resolved[254]: Positive Trust Anchors:
Feb 13 15:16:12.625718 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:16:12.625750 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:16:12.629037 systemd-resolved[254]: Defaulting to hostname 'linux'.
Feb 13 15:16:12.629840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:12.636802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:12.749649 kernel: SCSI subsystem initialized
Feb 13 15:16:12.757664 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:16:12.767649 kernel: iscsi: registered transport (tcp)
Feb 13 15:16:12.785366 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:16:12.785385 kernel: QLogic iSCSI HBA Driver
Feb 13 15:16:12.817122 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:12.837761 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:16:12.867117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:16:12.867151 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:16:12.873310 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:16:12.921635 kernel: raid6: neonx8 gen() 15801 MB/s
Feb 13 15:16:12.941628 kernel: raid6: neonx4 gen() 15788 MB/s
Feb 13 15:16:12.961627 kernel: raid6: neonx2 gen() 13217 MB/s
Feb 13 15:16:12.982628 kernel: raid6: neonx1 gen() 10466 MB/s
Feb 13 15:16:13.002627 kernel: raid6: int64x8 gen() 6795 MB/s
Feb 13 15:16:13.023632 kernel: raid6: int64x4 gen() 7357 MB/s
Feb 13 15:16:13.043627 kernel: raid6: int64x2 gen() 6117 MB/s
Feb 13 15:16:13.066764 kernel: raid6: int64x1 gen() 5062 MB/s
Feb 13 15:16:13.066775 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s
Feb 13 15:16:13.091699 kernel: raid6: .... xor() 12003 MB/s, rmw enabled
Feb 13 15:16:13.091715 kernel: raid6: using neon recovery algorithm
Feb 13 15:16:13.102513 kernel: xor: measuring software checksum speed
Feb 13 15:16:13.102529 kernel: 8regs : 21664 MB/sec
Feb 13 15:16:13.105898 kernel: 32regs : 21687 MB/sec
Feb 13 15:16:13.109206 kernel: arm64_neon : 27993 MB/sec
Feb 13 15:16:13.113135 kernel: xor: using function: arm64_neon (27993 MB/sec)
Feb 13 15:16:13.162638 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:16:13.171792 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:13.191821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:13.211917 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Feb 13 15:16:13.217073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:13.234729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:16:13.258197 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Feb 13 15:16:13.289429 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:13.306926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:16:13.342337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:13.363788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:16:13.386727 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:13.401646 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:13.415304 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:13.428067 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:16:13.451648 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 15:16:13.453761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:16:13.473266 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:13.506340 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:16:13.506373 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:16:13.506387 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 15:16:13.506405 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 13 15:16:13.506414 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:16:13.507559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:13.530787 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:16:13.530948 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:16:13.507855 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:13.552774 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:13.581116 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 15:16:13.581139 kernel: PTP clock support registered
Feb 13 15:16:13.581149 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 15:16:13.581158 kernel: scsi host0: storvsc_host_t
Feb 13 15:16:13.574695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:13.610779 kernel: scsi host1: storvsc_host_t
Feb 13 15:16:13.610970 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 15:16:13.611075 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 15:16:13.574952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:13.604199 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:13.627390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:13.641785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:13.644167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:13.678727 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 15:16:13.678749 kernel: hv_netvsc 0022487e-99e6-0022-487e-99e60022487e eth0: VF slot 1 added
Feb 13 15:16:13.678892 kernel: hv_vmbus: registering driver hv_utils
Feb 13 15:16:13.667440 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:16:13.231182 kernel: hv_vmbus: registering driver hv_pci
Feb 13 15:16:13.260793 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 15:16:13.260812 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 15:16:13.260820 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 15:16:13.260829 kernel: hv_pci f9cc16e4-3049-408a-a923-7ca851315bf5: PCI VMBus probing: Using version 0x10004
Feb 13 15:16:13.332582 kernel: hv_pci f9cc16e4-3049-408a-a923-7ca851315bf5: PCI host bridge to bus 3049:00
Feb 13 15:16:13.332689 systemd-journald[218]: Time jumped backwards, rotating.
Feb 13 15:16:13.332743 kernel: pci_bus 3049:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 15:16:13.332839 kernel: pci_bus 3049:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 15:16:13.332934 kernel: pci 3049:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 15:16:13.333038 kernel: pci 3049:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:16:13.333183 kernel: pci 3049:00:02.0: enabling Extended Tags
Feb 13 15:16:13.333291 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 15:16:13.333386 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:16:13.333396 kernel: pci 3049:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3049:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 15:16:13.333480 kernel: pci_bus 3049:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 15:16:13.333555 kernel: pci 3049:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:16:13.333634 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 15:16:13.365623 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 15:16:13.365742 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 15:16:13.365827 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 15:16:13.365923 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 15:16:13.366014 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 15:16:13.366136 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:16:13.366146 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 15:16:13.686034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:13.225675 systemd-resolved[254]: Clock change detected. Flushing caches.
Feb 13 15:16:13.271283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:13.305275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:13.351644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:13.419766 kernel: mlx5_core 3049:00:02.0: enabling device (0000 -> 0002)
Feb 13 15:16:13.636476 kernel: mlx5_core 3049:00:02.0: firmware version: 16.30.1284
Feb 13 15:16:13.636612 kernel: hv_netvsc 0022487e-99e6-0022-487e-99e60022487e eth0: VF registering: eth1
Feb 13 15:16:13.637064 kernel: mlx5_core 3049:00:02.0 eth1: joined to eth0
Feb 13 15:16:13.637222 kernel: mlx5_core 3049:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 15:16:13.644130 kernel: mlx5_core 3049:00:02.0 enP12361s1: renamed from eth1
Feb 13 15:16:13.884805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 15:16:13.971111 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (489)
Feb 13 15:16:13.988440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:16:14.112115 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (490)
Feb 13 15:16:14.126963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 15:16:14.134050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 15:16:14.159339 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:16:14.186954 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:16:14.202202 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:16:15.212144 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:16:15.212193 disk-uuid[606]: The operation has completed successfully.
Feb 13 15:16:15.281175 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:16:15.281278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:16:15.318225 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:16:15.331448 sh[692]: Success
Feb 13 15:16:15.365140 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:16:15.563839 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:16:15.581210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:16:15.591131 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:16:15.626108 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:16:15.626158 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:15.626168 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:16:15.635316 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:16:15.639391 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:16:15.879319 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:16:15.884286 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:16:15.904373 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:16:15.932022 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:15.932041 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:15.930253 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:16:15.951376 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:16:15.970294 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:16:15.986589 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:16:15.991310 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:15.994683 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:16:16.009327 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:16:16.052142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:16.070248 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:16:16.100335 systemd-networkd[877]: lo: Link UP
Feb 13 15:16:16.100341 systemd-networkd[877]: lo: Gained carrier
Feb 13 15:16:16.105474 systemd-networkd[877]: Enumeration completed
Feb 13 15:16:16.108342 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:16:16.114806 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:16.114809 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:16:16.115272 systemd[1]: Reached target network.target - Network.
Feb 13 15:16:16.172116 kernel: mlx5_core 3049:00:02.0 enP12361s1: Link up
Feb 13 15:16:16.212114 kernel: hv_netvsc 0022487e-99e6-0022-487e-99e60022487e eth0: Data path switched to VF: enP12361s1
Feb 13 15:16:16.212684 systemd-networkd[877]: enP12361s1: Link UP
Feb 13 15:16:16.212806 systemd-networkd[877]: eth0: Link UP
Feb 13 15:16:16.212943 systemd-networkd[877]: eth0: Gained carrier
Feb 13 15:16:16.212952 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:16.224541 systemd-networkd[877]: enP12361s1: Gained carrier
Feb 13 15:16:16.245127 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 15:16:16.953866 ignition[824]: Ignition 2.20.0
Feb 13 15:16:16.953881 ignition[824]: Stage: fetch-offline
Feb 13 15:16:16.958767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:16.953936 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:16.953946 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:16.954044 ignition[824]: parsed url from cmdline: ""
Feb 13 15:16:16.979323 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:16:16.954047 ignition[824]: no config URL provided
Feb 13 15:16:16.954052 ignition[824]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:16:16.954059 ignition[824]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:16:16.954063 ignition[824]: failed to fetch config: resource requires networking
Feb 13 15:16:16.954249 ignition[824]: Ignition finished successfully
Feb 13 15:16:16.997915 ignition[888]: Ignition 2.20.0
Feb 13 15:16:16.997921 ignition[888]: Stage: fetch
Feb 13 15:16:16.998107 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:16.998116 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:16.998210 ignition[888]: parsed url from cmdline: ""
Feb 13 15:16:16.998214 ignition[888]: no config URL provided
Feb 13 15:16:16.998218 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:16:16.998225 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:16:16.998261 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 15:16:17.125734 ignition[888]: GET result: OK
Feb 13 15:16:17.125829 ignition[888]: config has been read from IMDS userdata
Feb 13 15:16:17.125879 ignition[888]: parsing config with SHA512: be3b9f80396ca239b1a53e22126c630cc5c9dcb30e67d2a4b0c36296c679e67f09f9653b945353f2b894a69408df3bc1f753dd27b7aef3e79098c05c97b347de
Feb 13 15:16:17.136336 unknown[888]: fetched base config from "system"
Feb 13 15:16:17.136348 unknown[888]: fetched base config from "system"
Feb 13 15:16:17.136836 ignition[888]: fetch: fetch complete
Feb 13 15:16:17.136354 unknown[888]: fetched user config from "azure"
Feb 13 15:16:17.136842 ignition[888]: fetch: fetch passed
Feb 13 15:16:17.138748 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:17.136885 ignition[888]: Ignition finished successfully
Feb 13 15:16:17.155304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:16:17.184646 ignition[894]: Ignition 2.20.0
Feb 13 15:16:17.184658 ignition[894]: Stage: kargs
Feb 13 15:16:17.188104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:17.184819 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:17.184828 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:17.210241 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:16:17.185734 ignition[894]: kargs: kargs passed
Feb 13 15:16:17.185777 ignition[894]: Ignition finished successfully
Feb 13 15:16:17.240349 ignition[901]: Ignition 2.20.0
Feb 13 15:16:17.243730 ignition[901]: Stage: disks
Feb 13 15:16:17.246446 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:16:17.243925 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:17.254909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:17.243934 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:17.265417 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:16:17.244845 ignition[901]: disks: disks passed
Feb 13 15:16:17.276274 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:17.244889 ignition[901]: Ignition finished successfully
Feb 13 15:16:17.287193 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:17.297990 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:17.328313 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:16:17.414263 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 15:16:17.422470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:16:17.437267 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:16:17.492105 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:16:17.492336 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:16:17.497293 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:17.542171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:17.552203 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:16:17.561283 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:16:17.567561 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:16:17.567603 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:17.594052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:16:17.634990 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920)
Feb 13 15:16:17.635022 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:17.635032 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:17.635041 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:16:17.640886 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:16:17.654031 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:16:17.654460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:17.941239 systemd-networkd[877]: enP12361s1: Gained IPv6LL
Feb 13 15:16:18.069375 systemd-networkd[877]: eth0: Gained IPv6LL
Feb 13 15:16:18.180638 coreos-metadata[922]: Feb 13 15:16:18.180 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:16:18.191140 coreos-metadata[922]: Feb 13 15:16:18.191 INFO Fetch successful
Feb 13 15:16:18.196172 coreos-metadata[922]: Feb 13 15:16:18.195 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:16:18.208802 coreos-metadata[922]: Feb 13 15:16:18.208 INFO Fetch successful
Feb 13 15:16:18.222715 coreos-metadata[922]: Feb 13 15:16:18.222 INFO wrote hostname ci-4230.0.1-a-0ecc5c528f to /sysroot/etc/hostname
Feb 13 15:16:18.232214 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:16:18.354376 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:16:18.378764 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:16:18.386257 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:16:18.394877 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:16:19.212848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:19.228223 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:16:19.242223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:19.253510 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:19.258863 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:16:19.288487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:19.299307 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:16:19.311282 ignition[1039]: INFO : Ignition 2.20.0
Feb 13 15:16:19.311282 ignition[1039]: INFO : Stage: mount
Feb 13 15:16:19.311282 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:19.311282 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:19.311282 ignition[1039]: INFO : mount: mount passed
Feb 13 15:16:19.311282 ignition[1039]: INFO : Ignition finished successfully
Feb 13 15:16:19.318185 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:16:19.337358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:19.372110 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051)
Feb 13 15:16:19.385063 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:19.385113 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:19.389237 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:16:19.396110 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:16:19.397636 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:19.426207 ignition[1068]: INFO : Ignition 2.20.0
Feb 13 15:16:19.426207 ignition[1068]: INFO : Stage: files
Feb 13 15:16:19.434024 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:19.434024 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:16:19.434024 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:16:19.451994 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:16:19.451994 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:16:19.508286 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:16:19.515686 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:16:19.515686 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:16:19.509601 unknown[1068]: wrote ssh authorized keys file for user: core
Feb 13 15:16:19.534858 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:19.534858 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:19.615066 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:16:19.735421 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:19.746259 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:19.746259 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:20.183908 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:16:20.280615 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:20.290197 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:20.374628 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:20.374628 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:16:20.374628 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:16:20.374628 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:16:20.374628 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:16:20.702150 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:16:20.903479 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:16:20.903479 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:16:20.923237 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:20.934519 ignition[1068]: INFO : files: files passed
Feb 13 15:16:20.934519 ignition[1068]: INFO : Ignition finished successfully
Feb 13 15:16:20.935459 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:16:20.974346 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:16:20.993259 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:16:21.018557 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:16:21.069798 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:21.018659 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:16:21.092578 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:21.092578 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:21.037377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:21.045924 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:16:21.061333 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:16:21.103641 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:16:21.103746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:16:21.118563 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:16:21.131353 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:16:21.143165 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:16:21.169329 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:16:21.204639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:21.221523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:16:21.238910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:21.245604 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:21.258227 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:16:21.268935 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:16:21.269054 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:21.284896 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:16:21.296432 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:16:21.306376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:16:21.316505 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:16:21.328695 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:16:21.342108 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:16:21.353235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:21.364785 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:16:21.376583 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:16:21.388211 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:16:21.397539 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:16:21.397671 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:21.412494 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:21.418962 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:21.431678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:16:21.431745 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:21.444455 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Feb 13 15:16:21.444575 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:21.461665 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:16:21.461840 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:21.475421 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:16:21.475564 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:16:21.485921 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:16:21.486068 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:16:21.522216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:16:21.549524 ignition[1121]: INFO : Ignition 2.20.0 Feb 13 15:16:21.549524 ignition[1121]: INFO : Stage: umount Feb 13 15:16:21.549524 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:21.549524 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:16:21.549524 ignition[1121]: INFO : umount: umount passed Feb 13 15:16:21.549524 ignition[1121]: INFO : Ignition finished successfully Feb 13 15:16:21.544515 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:16:21.553649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:16:21.553797 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:21.569691 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:16:21.569813 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:21.584354 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:16:21.584442 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:16:21.592536 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:16:21.592641 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:16:21.612387 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:16:21.612480 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:16:21.623342 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:16:21.623388 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:16:21.634020 systemd[1]: Stopped target network.target - Network. Feb 13 15:16:21.639341 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:16:21.639401 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:21.652240 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:16:21.662311 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:16:21.664108 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:21.679401 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:16:21.689508 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:16:21.700017 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:16:21.700070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:16:21.710868 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:16:21.710898 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:21.722179 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 13 15:16:21.722231 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:16:21.732996 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:16:21.733033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:16:21.743931 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:16:21.755232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:16:21.766184 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:16:21.766787 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:16:21.766867 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:16:21.780135 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:16:21.780280 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:16:21.797336 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:16:21.797565 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:16:21.797654 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:16:22.012114 kernel: hv_netvsc 0022487e-99e6-0022-487e-99e60022487e eth0: Data path switched from VF: enP12361s1 Feb 13 15:16:21.817183 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:16:21.819811 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:16:21.819881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:21.843258 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:16:21.854783 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:16:21.854868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:21.867374 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:16:21.867427 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:21.883081 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:16:21.883145 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:21.889166 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:16:21.889212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:21.906292 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:21.918065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:16:21.918144 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:16:21.932019 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:16:21.932222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:21.946282 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:16:21.946328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:21.958022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:16:21.958051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:21.969400 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 15:16:21.969463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:21.995864 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:16:21.995926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:16:22.012166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:16:22.012222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:22.042311 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:16:22.058705 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:16:22.058780 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:22.085388 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:16:22.085450 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:22.092601 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:16:22.092655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:22.106549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:22.106596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:22.315676 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 15:16:22.131079 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:16:22.131162 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:16:22.131510 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:16:22.131606 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:16:22.142772 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:16:22.142861 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:16:22.154288 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:16:22.154368 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:16:22.168626 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:16:22.178049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:16:22.178131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:16:22.199229 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:16:22.214095 systemd[1]: Switching root. 
Feb 13 15:16:22.389202 systemd-journald[218]: Journal stopped Feb 13 15:16:26.941920 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:16:26.941941 kernel: SELinux: policy capability open_perms=1 Feb 13 15:16:26.941951 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:16:26.941958 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:16:26.941970 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:16:26.941977 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:16:26.941986 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:16:26.941994 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:16:26.942002 kernel: audit: type=1403 audit(1739459783.044:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:16:26.942012 systemd[1]: Successfully loaded SELinux policy in 104.768ms. Feb 13 15:16:26.942023 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.489ms. Feb 13 15:16:26.942033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:16:26.942041 systemd[1]: Detected virtualization microsoft. Feb 13 15:16:26.942049 systemd[1]: Detected architecture arm64. Feb 13 15:16:26.942058 systemd[1]: Detected first boot. Feb 13 15:16:26.942068 systemd[1]: Hostname set to <ci-4230.0.1-a-0ecc5c528f>. Feb 13 15:16:26.942077 systemd[1]: Initializing machine ID from random generator. Feb 13 15:16:26.942111 zram_generator::config[1167]: No configuration found. Feb 13 15:16:26.942123 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:16:26.942131 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:16:26.942141 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:16:26.942149 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:16:26.942160 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:16:26.942168 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:16:26.942177 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:16:26.942187 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:16:26.942198 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:16:26.942207 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:16:26.942216 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:16:26.942227 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:16:26.942236 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:16:26.942245 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:16:26.942254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:26.942263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:26.942272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Feb 13 15:16:26.942281 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:16:26.942290 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:16:26.942300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:26.942309 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:16:26.942318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:26.942330 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:16:26.942338 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:16:26.942347 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:26.942356 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:16:26.942365 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:26.942376 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:26.942385 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:26.942395 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:26.942404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:16:26.942413 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:16:26.942422 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:16:26.942433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:26.942442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:26.942451 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:26.942460 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:16:26.942469 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:16:26.942478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:16:26.942487 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:16:26.942498 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:16:26.942507 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:16:26.942516 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:16:26.942526 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:16:26.942535 systemd[1]: Reached target machines.target - Containers. Feb 13 15:16:26.942544 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:16:26.942553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:26.942562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:26.942573 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:16:26.942582 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:26.942592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 15:16:26.942602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:26.942611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:16:26.942620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:26.942629 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:16:26.942639 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:16:26.942649 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:16:26.942659 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:16:26.942668 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:16:26.942677 kernel: fuse: init (API version 7.39) Feb 13 15:16:26.942686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:26.942695 kernel: loop: module loaded Feb 13 15:16:26.942704 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:26.942713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:26.942722 kernel: ACPI: bus type drm_connector registered Feb 13 15:16:26.942732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:16:26.942742 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:16:26.942751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:16:26.942780 systemd-journald[1271]: Collecting audit messages is disabled. Feb 13 15:16:26.942801 systemd-journald[1271]: Journal started Feb 13 15:16:26.942821 systemd-journald[1271]: Runtime Journal (/run/log/journal/e2988b96301f4c3ea5017bc9062c9210) is 8M, max 78.5M, 70.5M free. Feb 13 15:16:26.000671 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:16:26.011944 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:16:26.012342 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:16:26.013468 systemd[1]: systemd-journald.service: Consumed 3.226s CPU time. Feb 13 15:16:26.955934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:26.960039 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:16:26.966125 systemd[1]: Stopped verity-setup.service. Feb 13 15:16:26.983557 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:26.984366 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:16:26.990648 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:16:26.997208 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:16:27.003296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:16:27.009265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:16:27.015259 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:16:27.020690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:16:27.028650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Feb 13 15:16:27.036172 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:16:27.036321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:16:27.042789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:27.044173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:27.050931 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:27.051070 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:27.057254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:27.057393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:27.064063 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:16:27.064242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:16:27.073585 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:27.073735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:27.080220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:27.086573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:16:27.093648 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:16:27.100937 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:16:27.108015 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:27.124982 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:16:27.137164 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:16:27.144936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:16:27.151697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:16:27.151732 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:27.158174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:16:27.166148 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:16:27.173045 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:16:27.178680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:27.206253 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:16:27.213339 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:16:27.219730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:27.220784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:16:27.229934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:27.231329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:27.238274 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 15:16:27.247292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:16:27.262248 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:16:27.271857 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:16:27.277743 systemd-journald[1271]: Time spent on flushing to /var/log/journal/e2988b96301f4c3ea5017bc9062c9210 is 54.689ms for 919 entries. Feb 13 15:16:27.277743 systemd-journald[1271]: System Journal (/var/log/journal/e2988b96301f4c3ea5017bc9062c9210) is 11.8M, max 2.6G, 2.6G free. Feb 13 15:16:27.412069 systemd-journald[1271]: Received client request to flush runtime journal. Feb 13 15:16:27.412146 systemd-journald[1271]: /var/log/journal/e2988b96301f4c3ea5017bc9062c9210/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Feb 13 15:16:27.412167 systemd-journald[1271]: Rotating system journal. Feb 13 15:16:27.412189 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 15:16:27.285365 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:16:27.292497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:16:27.299644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:16:27.311585 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:16:27.339656 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:16:27.346320 udevadm[1311]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:16:27.415692 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:16:27.422853 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:27.432568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:16:27.433205 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:16:27.468798 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Feb 13 15:16:27.468818 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Feb 13 15:16:27.474898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:27.491328 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:16:27.553777 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:16:27.573244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:27.586623 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Feb 13 15:16:27.586637 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Feb 13 15:16:27.591176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:27.889613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:16:27.935125 kernel: loop1: detected capacity change from 0 to 28720 Feb 13 15:16:28.392213 kernel: loop2: detected capacity change from 0 to 113512 Feb 13 15:16:28.398374 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:16:28.409333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:16:28.437362 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Feb 13 15:16:28.697123 kernel: loop3: detected capacity change from 0 to 189592 Feb 13 15:16:28.745130 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 15:16:28.756083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:28.772763 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:28.783544 kernel: loop5: detected capacity change from 0 to 28720 Feb 13 15:16:28.798128 kernel: loop6: detected capacity change from 0 to 113512 Feb 13 15:16:28.820188 kernel: loop7: detected capacity change from 0 to 189592 Feb 13 15:16:28.833307 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:16:28.844877 (sd-merge)[1338]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 15:16:28.845428 (sd-merge)[1338]: Merged extensions into '/usr'. Feb 13 15:16:28.889545 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:16:28.892658 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:16:28.892761 systemd[1]: Reloading... Feb 13 15:16:28.984124 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:16:28.997133 zram_generator::config[1405]: No configuration found. Feb 13 15:16:29.023122 kernel: hv_vmbus: registering driver hv_balloon Feb 13 15:16:29.034947 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 15:16:29.035026 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 15:16:29.044149 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 15:16:29.044229 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 15:16:29.056880 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 15:16:29.066330 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:16:29.074683 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:16:29.147195 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1353) Feb 13 15:16:29.201496 systemd-networkd[1352]: lo: Link UP Feb 13 15:16:29.201505 systemd-networkd[1352]: lo: Gained carrier Feb 13 15:16:29.210391 systemd-networkd[1352]: Enumeration completed Feb 13 15:16:29.213429 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:29.213521 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:29.229718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:29.274107 kernel: mlx5_core 3049:00:02.0 enP12361s1: Link up Feb 13 15:16:29.300130 kernel: hv_netvsc 0022487e-99e6-0022-487e-99e60022487e eth0: Data path switched to VF: enP12361s1 Feb 13 15:16:29.300749 systemd-networkd[1352]: enP12361s1: Link UP Feb 13 15:16:29.300853 systemd-networkd[1352]: eth0: Link UP Feb 13 15:16:29.300862 systemd-networkd[1352]: eth0: Gained carrier Feb 13 15:16:29.300875 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:16:29.307377 systemd-networkd[1352]: enP12361s1: Gained carrier Feb 13 15:16:29.315131 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:16:29.390738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:16:29.397434 systemd[1]: Reloading finished in 504 ms. Feb 13 15:16:29.422063 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:16:29.428045 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:29.435164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:16:29.484351 systemd[1]: Starting ensure-sysext.service... Feb 13 15:16:29.491360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:16:29.499734 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:16:29.508380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:16:29.520191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:29.530922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:29.548343 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:16:29.549734 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:16:29.550620 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:16:29.550996 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. Feb 13 15:16:29.551217 systemd[1]: Reload requested from client PID 1528 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:16:29.551233 systemd[1]: Reloading... Feb 13 15:16:29.553177 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. Feb 13 15:16:29.564255 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:29.564886 systemd-tmpfiles[1532]: Skipping /boot Feb 13 15:16:29.577277 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:29.577294 systemd-tmpfiles[1532]: Skipping /boot Feb 13 15:16:29.634211 zram_generator::config[1574]: No configuration found. Feb 13 15:16:29.741711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:29.840666 systemd[1]: Reloading finished in 289 ms. Feb 13 15:16:29.850175 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:16:29.869943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:16:29.877673 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:16:29.886119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:29.894131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:29.913351 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Feb 13 15:16:29.920261 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:16:29.929388 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:16:29.937387 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:16:29.952357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:16:29.960294 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:16:29.969630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:29.972378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:29.984365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:29.996360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:30.006047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:30.006188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:30.008983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:30.009219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:30.015813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:30.015976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:30.023288 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:30.023435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:30.036927 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:16:30.050772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:30.057313 lvm[1636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:30.059401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:30.072423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:30.083881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:30.094640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:30.100796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:30.100924 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:30.101068 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:16:30.108868 augenrules[1671]: No rules Feb 13 15:16:30.110804 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:16:30.111008 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:16:30.117704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Feb 13 15:16:30.120229 systemd-resolved[1644]: Positive Trust Anchors: Feb 13 15:16:30.120537 systemd-resolved[1644]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:30.120613 systemd-resolved[1644]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:30.126272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:30.126433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:30.133213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:16:30.138382 systemd-resolved[1644]: Using system hostname 'ci-4230.0.1-a-0ecc5c528f'. Feb 13 15:16:30.141507 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:30.149436 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:30.149620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:30.156657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:30.156971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:30.164299 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:30.164447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:30.174751 systemd[1]: Finished ensure-sysext.service. Feb 13 15:16:30.183850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:30.190000 systemd[1]: Reached target network.target - Network. Feb 13 15:16:30.195219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:30.206225 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:16:30.210360 lvm[1684]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:30.212718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:30.212790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:30.236523 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:16:30.476472 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:16:30.484601 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:30.549232 systemd-networkd[1352]: enP12361s1: Gained IPv6LL Feb 13 15:16:30.869186 systemd-networkd[1352]: eth0: Gained IPv6LL Feb 13 15:16:30.872128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:16:30.879578 systemd[1]: Reached target network-online.target - Network is Online. 
Feb 13 15:16:33.188446 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:16:33.199998 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:16:33.213238 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:16:33.226257 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:16:33.232979 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:33.238613 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:16:33.245503 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:16:33.252651 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:16:33.258478 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:16:33.266081 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:16:33.273209 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:16:33.273244 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:33.278469 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:33.312445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:16:33.320261 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:16:33.327924 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:16:33.335623 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:16:33.343514 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:16:33.351694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:16:33.357620 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:16:33.364664 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:16:33.370636 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:33.375825 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:16:33.380858 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:33.380891 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:33.394192 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:16:33.403281 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:16:33.414282 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:16:33.430239 (chronyd)[1693]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:16:33.436329 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:16:33.442369 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:16:33.449332 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 15:16:33.454720 chronyd[1703]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:16:33.457951 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:16:33.457994 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:16:33.460493 jq[1700]: false Feb 13 15:16:33.468046 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:16:33.469276 KVP[1704]: KVP starting; pid is:1704 Feb 13 15:16:33.477301 KVP[1704]: KVP LIC Version: 3.1 Feb 13 15:16:33.477642 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 15:16:33.478112 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:16:33.478659 chronyd[1703]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:16:33.478833 chronyd[1703]: Loaded seccomp filter (level 2) Feb 13 15:16:33.479693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:33.488550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:16:33.497635 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:16:33.508310 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:16:33.520409 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:16:33.528497 extend-filesystems[1701]: Found loop4 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found loop5 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found loop6 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found loop7 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda1 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda2 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda3 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found usr Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda4 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda6 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda7 Feb 13 15:16:33.538598 extend-filesystems[1701]: Found sda9 Feb 13 15:16:33.538598 extend-filesystems[1701]: Checking size of /dev/sda9 Feb 13 15:16:33.532320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:16:33.681585 dbus-daemon[1698]: [system] SELinux support is enabled Feb 13 15:16:33.843619 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1745) Feb 13 15:16:33.843641 extend-filesystems[1701]: Old size kept for /dev/sda9 Feb 13 15:16:33.843641 extend-filesystems[1701]: Found sr0 Feb 13 15:16:33.549307 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.790 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.796 INFO Fetch successful Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.796 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.800 INFO Fetch successful Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.800 INFO Fetching http://168.63.129.16/machine/83dbf687-4291-433b-8c18-9af154a19daf/a1c723ab%2D91d4%2D48a4%2Dbeca%2D21e727742452.%5Fci%2D4230.0.1%2Da%2D0ecc5c528f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.802 INFO Fetch successful Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.802 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:16:33.878558 coreos-metadata[1695]: Feb 13 15:16:33.821 INFO Fetch successful Feb 13 15:16:33.571546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:16:33.572073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:16:33.878954 update_engine[1726]: I20250213 15:16:33.731050 1726 main.cc:92] Flatcar Update Engine starting Feb 13 15:16:33.878954 update_engine[1726]: I20250213 15:16:33.745461 1726 update_check_scheduler.cc:74] Next update check in 5m55s Feb 13 15:16:33.583593 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:16:33.882370 jq[1732]: true Feb 13 15:16:33.613223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:16:33.629746 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:16:33.649485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:16:33.649707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:16:33.650031 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:16:33.903918 jq[1792]: true Feb 13 15:16:33.650201 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:16:33.677521 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:16:33.677721 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:16:33.694637 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:16:33.716982 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:16:33.732053 systemd-logind[1720]: New seat seat0. Feb 13 15:16:33.732901 systemd-logind[1720]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 13 15:16:33.835788 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:16:33.852166 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:16:33.852359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:16:33.904380 (ntainerd)[1794]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:16:33.924562 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Feb 13 15:16:33.972682 dbus-daemon[1698]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:16:33.975692 tar[1752]: linux-arm64/helm Feb 13 15:16:33.999590 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:16:34.010647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:16:34.010862 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:16:34.010981 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:16:34.023217 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:16:34.023337 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:16:34.044443 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:16:34.073388 bash[1842]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:34.076160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:16:34.088641 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:16:34.339736 locksmithd[1843]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:16:34.493649 tar[1752]: linux-arm64/LICENSE Feb 13 15:16:34.493851 tar[1752]: linux-arm64/README.md Feb 13 15:16:34.510116 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:16:34.534062 containerd[1794]: time="2025-02-13T15:16:34.533979560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:16:34.561709 containerd[1794]: time="2025-02-13T15:16:34.561656600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.563191 containerd[1794]: time="2025-02-13T15:16:34.563157520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:34.563279 containerd[1794]: time="2025-02-13T15:16:34.563266560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:16:34.563454 containerd[1794]: time="2025-02-13T15:16:34.563344000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:16:34.563600 containerd[1794]: time="2025-02-13T15:16:34.563581160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563655360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563734160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563746360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563942760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563957200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563971080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.563980760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.564055760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.564262520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.564403240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:34.564582 containerd[1794]: time="2025-02-13T15:16:34.564416760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:16:34.564806 containerd[1794]: time="2025-02-13T15:16:34.564489440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:16:34.564806 containerd[1794]: time="2025-02-13T15:16:34.564547160Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:16:34.571856 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:16:34.590116 containerd[1794]: time="2025-02-13T15:16:34.589778000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:16:34.590116 containerd[1794]: time="2025-02-13T15:16:34.589878360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:16:34.590116 containerd[1794]: time="2025-02-13T15:16:34.589895440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:16:34.590116 containerd[1794]: time="2025-02-13T15:16:34.589946200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:16:34.590116 containerd[1794]: time="2025-02-13T15:16:34.589971680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:16:34.590657 containerd[1794]: time="2025-02-13T15:16:34.590412120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.590952800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592586320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592619760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592644240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592660680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592675320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592688160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592702000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592717160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592732320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592744200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592755400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592779880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.597978 containerd[1794]: time="2025-02-13T15:16:34.592797840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.592173 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.592810080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.592823320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593507480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..."
type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593523000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593535360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593548760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593576280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593591680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593603040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593614960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593627040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593650680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593673760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593689400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.598442 containerd[1794]: time="2025-02-13T15:16:34.593701040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.593836920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.593860040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594480240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594500000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594510880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594524760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594534400Z" level=info msg="NRI interface is disabled by configuration." 
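The snapshotter skips earlier in this plugin scan are plain filesystem checks: /var/lib/containerd sits on ext4 on this host, so the btrfs and zfs snapshotters bow out and overlayfs is what containerd ends up using. A rough Python equivalent of that probe, assuming /proc/self/mounts and a naive longest-prefix match is good enough:

    import os

    def fs_type(path):
        # Report the filesystem type of the mount backing a path; a
        # simplification of the checks behind the "skip plugin" lines.
        path = os.path.realpath(path)
        best, best_type = "", "unknown"
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mnt, fstype = line.split()[:3]
                if path.startswith(mnt) and len(mnt) > len(best):
                    best, best_type = mnt, fstype
        return best_type

    print(fs_type("/var/lib/containerd"))   # ext4 here, per the log above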
Feb 13 15:16:34.599298 containerd[1794]: time="2025-02-13T15:16:34.594557680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.595622320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.595675360Z" level=info msg="Connect containerd service" Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.596653960Z" level=info msg="using legacy CRI server" Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.596680280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.596816280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:16:34.599447 containerd[1794]: time="2025-02-13T15:16:34.598840640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.599972120Z" level=info msg="Start subscribing containerd event" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600029160Z" level=info msg="Start recovering state" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600268480Z" level=info msg="Start event monitor" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600284680Z" level=info msg="Start snapshots syncer" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600294280Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600301040Z" level=info msg="Start streaming server" Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600503440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600550080Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:16:34.601129 containerd[1794]: time="2025-02-13T15:16:34.600603040Z" level=info msg="containerd successfully booted in 0.069794s" Feb 13 15:16:34.608284 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:16:34.623324 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:16:34.629489 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:16:34.642809 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:16:34.644132 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:16:34.659389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:16:34.667534 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:16:34.669257 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:16:34.687975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:16:34.719188 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:16:34.733412 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:16:34.745572 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:16:34.752738 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:16:34.758340 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:16:34.766869 systemd[1]: Startup finished in 647ms (kernel) + 11.678s (initrd) + 11.826s (userspace) = 24.152s. Feb 13 15:16:35.094282 login[1888]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 15:16:35.096978 login[1889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:16:35.102167 kubelet[1877]: E0213 15:16:35.102080 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:16:35.104657 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
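The "failed to load cni during init" error earlier in this containerd startup is the expected first-boot state: the CRI plugin watches /etc/cni/net.d and stays degraded until some CNI add-on installs a config there. The check itself reduces to little more than:

    import glob

    # The CRI plugin looks for *.conf/*.conflist files here and retries;
    # an empty directory yields exactly the message logged above.
    confs = sorted(glob.glob("/etc/cni/net.d/*.conf*"))
    print(confs if confs else "no network config found in /etc/cni/net.d")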
Feb 13 15:16:35.109321 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:16:35.109573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:16:35.109688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:16:35.110009 systemd[1]: kubelet.service: Consumed 661ms CPU time, 230.8M memory peak. Feb 13 15:16:35.117476 systemd-logind[1720]: New session 2 of user core. Feb 13 15:16:35.137125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:16:35.144342 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:16:35.146977 (systemd)[1903]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:16:35.149160 systemd-logind[1720]: New session c1 of user core. Feb 13 15:16:35.327521 systemd[1903]: Queued start job for default target default.target. Feb 13 15:16:35.334021 systemd[1903]: Created slice app.slice - User Application Slice. Feb 13 15:16:35.334051 systemd[1903]: Reached target paths.target - Paths. Feb 13 15:16:35.334105 systemd[1903]: Reached target timers.target - Timers. Feb 13 15:16:35.337283 systemd[1903]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:16:35.345922 systemd[1903]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:16:35.345992 systemd[1903]: Reached target sockets.target - Sockets. Feb 13 15:16:35.346037 systemd[1903]: Reached target basic.target - Basic System. Feb 13 15:16:35.346064 systemd[1903]: Reached target default.target - Main User Target. Feb 13 15:16:35.346109 systemd[1903]: Startup finished in 191ms. Feb 13 15:16:35.346240 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:16:35.356257 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:16:36.094748 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:16:36.100111 systemd-logind[1720]: New session 1 of user core. Feb 13 15:16:36.107428 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:16:36.428921 waagent[1879]: 2025-02-13T15:16:36.428776Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:16:36.435616 waagent[1879]: 2025-02-13T15:16:36.435537Z INFO Daemon Daemon OS: flatcar 4230.0.1 Feb 13 15:16:36.443115 waagent[1879]: 2025-02-13T15:16:36.440514Z INFO Daemon Daemon Python: 3.11.11 Feb 13 15:16:36.445416 waagent[1879]: 2025-02-13T15:16:36.445223Z INFO Daemon Daemon Run daemon Feb 13 15:16:36.449590 waagent[1879]: 2025-02-13T15:16:36.449541Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.1' Feb 13 15:16:36.458608 waagent[1879]: 2025-02-13T15:16:36.458551Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:16:36.464157 waagent[1879]: 2025-02-13T15:16:36.464109Z INFO Daemon Daemon Activate resource disk Feb 13 15:16:36.468845 waagent[1879]: 2025-02-13T15:16:36.468795Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:16:36.481544 waagent[1879]: 2025-02-13T15:16:36.481476Z INFO Daemon Daemon Found device: None Feb 13 15:16:36.486192 waagent[1879]: 2025-02-13T15:16:36.486143Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:16:36.494736 waagent[1879]: 2025-02-13T15:16:36.494682Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:16:36.506569 waagent[1879]: 2025-02-13T15:16:36.506510Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:16:36.513082 waagent[1879]: 2025-02-13T15:16:36.513029Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:16:36.524773 waagent[1879]: 2025-02-13T15:16:36.524152Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:16:36.538777 waagent[1879]: 2025-02-13T15:16:36.538707Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:16:36.549245 waagent[1879]: 2025-02-13T15:16:36.549187Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:16:36.554885 waagent[1879]: 2025-02-13T15:16:36.554837Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:16:36.621706 waagent[1879]: 2025-02-13T15:16:36.618664Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:16:36.648796 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:16:36.651507 waagent[1879]: 2025-02-13T15:16:36.651439Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:16:36.657008 waagent[1879]: 2025-02-13T15:16:36.656941Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:16:36.663121 waagent[1879]: 2025-02-13T15:16:36.663045Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:16:36.671189 waagent[1879]: 2025-02-13T15:16:36.671118Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:16:36.676711 waagent[1879]: 2025-02-13T15:16:36.676649Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:16:36.682188 waagent[1879]: 2025-02-13T15:16:36.682076Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:16:36.713072 waagent[1879]: 2025-02-13T15:16:36.713025Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:16:36.720319 waagent[1879]: 2025-02-13T15:16:36.720288Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:16:36.725926 waagent[1879]: 2025-02-13T15:16:36.725878Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:16:36.979617 waagent[1879]: 2025-02-13T15:16:36.979477Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:16:36.988114 waagent[1879]: 2025-02-13T15:16:36.987507Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:16:36.999339 waagent[1879]: 2025-02-13T15:16:36.999281Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:16:37.019963 waagent[1879]: 2025-02-13T15:16:37.019914Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:16:37.025869 waagent[1879]: 2025-02-13T15:16:37.025807Z INFO Daemon Feb 13 15:16:37.028746 waagent[1879]: 2025-02-13T15:16:37.028689Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7c069065-0ef8-4c01-ab5b-22a754b32b39 eTag: 5970338072204945926 source: Fabric] Feb 13 15:16:37.040184 waagent[1879]: 2025-02-13T15:16:37.040133Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 15:16:37.047437 waagent[1879]: 2025-02-13T15:16:37.047387Z INFO Daemon Feb 13 15:16:37.050438 waagent[1879]: 2025-02-13T15:16:37.050384Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:16:37.070498 waagent[1879]: 2025-02-13T15:16:37.070459Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:16:37.232524 waagent[1879]: 2025-02-13T15:16:37.232381Z INFO Daemon Downloaded certificate {'thumbprint': 'C28FC54185F42C38F6981EF3CCB6CD0B384EA710', 'hasPrivateKey': False} Feb 13 15:16:37.244225 waagent[1879]: 2025-02-13T15:16:37.244165Z INFO Daemon Downloaded certificate {'thumbprint': '614B9EE56433F8823C044D5E41547B06E8416653', 'hasPrivateKey': True} Feb 13 15:16:37.254352 waagent[1879]: 2025-02-13T15:16:37.254289Z INFO Daemon Fetch goal state completed Feb 13 15:16:37.301123 waagent[1879]: 2025-02-13T15:16:37.301040Z INFO Daemon Daemon Starting provisioning Feb 13 15:16:37.307332 waagent[1879]: 2025-02-13T15:16:37.307242Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:16:37.312426 waagent[1879]: 2025-02-13T15:16:37.312354Z INFO Daemon Daemon Set hostname [ci-4230.0.1-a-0ecc5c528f] Feb 13 15:16:37.335138 waagent[1879]: 2025-02-13T15:16:37.334918Z INFO Daemon Daemon Publish hostname [ci-4230.0.1-a-0ecc5c528f] Feb 13 15:16:37.341701 waagent[1879]: 2025-02-13T15:16:37.341624Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:16:37.349129 waagent[1879]: 2025-02-13T15:16:37.348455Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:16:37.360794 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:37.361335 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
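The thumbprint values in the two "Downloaded certificate" entries above follow the usual Azure convention, an uppercase SHA-1 over the DER-encoded certificate. A sketch for reproducing one, assuming a single-certificate PEM (the path is illustrative, not taken from this log):

    import hashlib
    import ssl

    with open("/var/lib/waagent/Certificates.pem") as f:   # illustrative path
        der = ssl.PEM_cert_to_DER_cert(f.read())
    print(hashlib.sha1(der).hexdigest().upper())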
Feb 13 15:16:37.361391 systemd-networkd[1352]: eth0: DHCP lease lost Feb 13 15:16:37.361707 waagent[1879]: 2025-02-13T15:16:37.361633Z INFO Daemon Daemon Create user account if not exists Feb 13 15:16:37.367485 waagent[1879]: 2025-02-13T15:16:37.367431Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:16:37.373098 waagent[1879]: 2025-02-13T15:16:37.373037Z INFO Daemon Daemon Configure sudoer Feb 13 15:16:37.382284 waagent[1879]: 2025-02-13T15:16:37.377856Z INFO Daemon Daemon Configure sshd Feb 13 15:16:37.382965 waagent[1879]: 2025-02-13T15:16:37.382901Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:16:37.396632 waagent[1879]: 2025-02-13T15:16:37.396553Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:16:37.418136 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:16:38.513116 waagent[1879]: 2025-02-13T15:16:38.512517Z INFO Daemon Daemon Provisioning complete Feb 13 15:16:38.532049 waagent[1879]: 2025-02-13T15:16:38.531991Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:16:38.538541 waagent[1879]: 2025-02-13T15:16:38.538480Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 15:16:38.548229 waagent[1879]: 2025-02-13T15:16:38.548174Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:16:38.683127 waagent[1957]: 2025-02-13T15:16:38.682567Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:16:38.683127 waagent[1957]: 2025-02-13T15:16:38.682724Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.1 Feb 13 15:16:38.683127 waagent[1957]: 2025-02-13T15:16:38.682777Z INFO ExtHandler ExtHandler Python: 3.11.11 Feb 13 15:16:38.734620 waagent[1957]: 2025-02-13T15:16:38.734536Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:16:38.734959 waagent[1957]: 2025-02-13T15:16:38.734918Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:16:38.735111 waagent[1957]: 2025-02-13T15:16:38.735060Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:16:38.744417 waagent[1957]: 2025-02-13T15:16:38.744337Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:16:38.751289 waagent[1957]: 2025-02-13T15:16:38.751226Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:16:38.751867 waagent[1957]: 2025-02-13T15:16:38.751816Z INFO ExtHandler Feb 13 15:16:38.751937 waagent[1957]: 2025-02-13T15:16:38.751906Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 486d3022-714b-4dea-8935-2271f76e1b58 eTag: 5970338072204945926 source: Fabric] Feb 13 15:16:38.752273 waagent[1957]: 2025-02-13T15:16:38.752221Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
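"Configure sshd" above is described but not shown: the agent drops in a snippet that disables password-based logins and turns on client keep-alive probing. A hypothetical reconstruction, where the file name and the exact directives are assumptions matching that description rather than contents recovered from this log:

    from pathlib import Path

    SNIPPET = "\n".join([
        "PasswordAuthentication no",        # no password-based logins
        "KbdInteractiveAuthentication no",  # nor keyboard-interactive
        "ClientAliveInterval 180",          # keep-alive probing
    ]) + "\n"

    # Assumed drop-in path; needs an sshd_config that includes this dir.
    Path("/etc/ssh/sshd_config.d/40-waagent.conf").write_text(SNIPPET)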
Feb 13 15:16:38.756529 waagent[1957]: 2025-02-13T15:16:38.756463Z INFO ExtHandler Feb 13 15:16:38.756605 waagent[1957]: 2025-02-13T15:16:38.756573Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:16:38.761068 waagent[1957]: 2025-02-13T15:16:38.761025Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:16:38.844818 waagent[1957]: 2025-02-13T15:16:38.844667Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C28FC54185F42C38F6981EF3CCB6CD0B384EA710', 'hasPrivateKey': False} Feb 13 15:16:38.845233 waagent[1957]: 2025-02-13T15:16:38.845185Z INFO ExtHandler Downloaded certificate {'thumbprint': '614B9EE56433F8823C044D5E41547B06E8416653', 'hasPrivateKey': True} Feb 13 15:16:38.845657 waagent[1957]: 2025-02-13T15:16:38.845611Z INFO ExtHandler Fetch goal state completed Feb 13 15:16:38.863432 waagent[1957]: 2025-02-13T15:16:38.863375Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1957 Feb 13 15:16:38.863587 waagent[1957]: 2025-02-13T15:16:38.863550Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:16:38.865425 waagent[1957]: 2025-02-13T15:16:38.865223Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:16:38.865646 waagent[1957]: 2025-02-13T15:16:38.865604Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:16:39.156725 waagent[1957]: 2025-02-13T15:16:39.156614Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:16:39.156885 waagent[1957]: 2025-02-13T15:16:39.156838Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:16:39.162955 waagent[1957]: 2025-02-13T15:16:39.162681Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 15:16:39.169208 systemd[1]: Reload requested from client PID 1972 ('systemctl') (unit waagent.service)... Feb 13 15:16:39.169437 systemd[1]: Reloading... Feb 13 15:16:39.272236 zram_generator::config[2011]: No configuration found. Feb 13 15:16:39.377613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:39.477735 systemd[1]: Reloading finished in 307 ms. Feb 13 15:16:39.493786 waagent[1957]: 2025-02-13T15:16:39.493568Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:16:39.500500 systemd[1]: Reload requested from client PID 2065 ('systemctl') (unit waagent.service)... Feb 13 15:16:39.500513 systemd[1]: Reloading... Feb 13 15:16:39.587283 zram_generator::config[2107]: No configuration found. Feb 13 15:16:39.687833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:39.788280 systemd[1]: Reloading finished in 287 ms. 
Feb 13 15:16:39.803278 waagent[1957]: 2025-02-13T15:16:39.802451Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:16:39.803278 waagent[1957]: 2025-02-13T15:16:39.802623Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:16:40.265896 waagent[1957]: 2025-02-13T15:16:40.265773Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:16:40.266689 waagent[1957]: 2025-02-13T15:16:40.266629Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:16:40.267570 waagent[1957]: 2025-02-13T15:16:40.267519Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:16:40.267677 waagent[1957]: 2025-02-13T15:16:40.267635Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:16:40.267771 waagent[1957]: 2025-02-13T15:16:40.267736Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:16:40.267983 waagent[1957]: 2025-02-13T15:16:40.267942Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 15:16:40.268482 waagent[1957]: 2025-02-13T15:16:40.268435Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:16:40.268892 waagent[1957]: 2025-02-13T15:16:40.268844Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:16:40.269008 waagent[1957]: 2025-02-13T15:16:40.268974Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:16:40.269074 waagent[1957]: 2025-02-13T15:16:40.269043Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:16:40.269240 waagent[1957]: 2025-02-13T15:16:40.269183Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:16:40.269377 waagent[1957]: 2025-02-13T15:16:40.269311Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:16:40.269377 waagent[1957]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:16:40.269377 waagent[1957]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:16:40.269377 waagent[1957]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:16:40.269377 waagent[1957]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:16:40.269377 waagent[1957]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:16:40.269377 waagent[1957]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:16:40.269585 waagent[1957]: 2025-02-13T15:16:40.269486Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:16:40.269862 waagent[1957]: 2025-02-13T15:16:40.269817Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:16:40.270013 waagent[1957]: 2025-02-13T15:16:40.269976Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:16:40.270069 waagent[1957]: 2025-02-13T15:16:40.270041Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:16:40.270648 waagent[1957]: 2025-02-13T15:16:40.270610Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:16:40.270824 waagent[1957]: 2025-02-13T15:16:40.270753Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
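The routing table MonitorHandler prints above comes straight from /proc/net/route, which stores IPv4 addresses as little-endian hex words; that is why the default gateway shows up as 0114C80A rather than 10.200.20.1. Decoding the interesting entries:

    import socket
    import struct

    def hex_to_ip(word):
        # /proc/net/route encodes addresses as little-endian 32-bit hex.
        return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

    for word in ("0114C80A", "0014C80A", "10813FA8", "FEA9FEA9"):
        print(word, "->", hex_to_ip(word))
    # 0114C80A -> 10.200.20.1     (default gateway)
    # 0014C80A -> 10.200.20.0     (local subnet)
    # 10813FA8 -> 168.63.129.16   (wire server host route)
    # FEA9FEA9 -> 169.254.169.254 (IMDS host route)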
Feb 13 15:16:40.285113 waagent[1957]: 2025-02-13T15:16:40.283311Z INFO ExtHandler ExtHandler Feb 13 15:16:40.285113 waagent[1957]: 2025-02-13T15:16:40.283434Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7a1dcbec-172b-4160-a871-3cfe94cf3282 correlation a0f7d001-3f9b-4b21-9ac6-14136e59ab99 created: 2025-02-13T15:15:22.368576Z] Feb 13 15:16:40.285113 waagent[1957]: 2025-02-13T15:16:40.283801Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 15:16:40.285113 waagent[1957]: 2025-02-13T15:16:40.284401Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:16:40.318287 waagent[1957]: 2025-02-13T15:16:40.318137Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C76D982E-B7D4-45DE-B803-605143D41BE0;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:16:40.348531 waagent[1957]: 2025-02-13T15:16:40.348441Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:16:40.348531 waagent[1957]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:16:40.348531 waagent[1957]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:16:40.348531 waagent[1957]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:99:e6 brd ff:ff:ff:ff:ff:ff Feb 13 15:16:40.348531 waagent[1957]: 3: enP12361s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:99:e6 brd ff:ff:ff:ff:ff:ff\ altname enP12361p0s2 Feb 13 15:16:40.348531 waagent[1957]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:16:40.348531 waagent[1957]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:16:40.348531 waagent[1957]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:16:40.348531 waagent[1957]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:16:40.348531 waagent[1957]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:16:40.348531 waagent[1957]: 2: eth0 inet6 fe80::222:48ff:fe7e:99e6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:16:40.348531 waagent[1957]: 3: enP12361s1 inet6 fe80::222:48ff:fe7e:99e6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:16:40.373182 waagent[1957]: 2025-02-13T15:16:40.372931Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 15:16:40.373182 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.373182 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.373182 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.373182 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.373182 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.373182 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.373182 waagent[1957]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:16:40.373182 waagent[1957]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:16:40.373182 waagent[1957]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:16:40.376373 waagent[1957]: 2025-02-13T15:16:40.376303Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:16:40.376373 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.376373 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.376373 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.376373 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.376373 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:16:40.376373 waagent[1957]: pkts bytes target prot opt in out source destination Feb 13 15:16:40.376373 waagent[1957]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:16:40.376373 waagent[1957]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:16:40.376373 waagent[1957]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:16:40.376930 waagent[1957]: 2025-02-13T15:16:40.376646Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:16:45.360413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:16:45.367319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:45.468208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:16:45.472773 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:16:45.530589 kubelet[2199]: E0213 15:16:45.530522 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:16:45.533194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:16:45.533320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:16:45.533992 systemd[1]: kubelet.service: Consumed 122ms CPU time, 95.4M memory peak. Feb 13 15:16:55.758218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:16:55.768726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:56.211235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
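The OUTPUT rules in the firewall dump above implement waagent's wire-server protection: DNS to 168.63.129.16 is allowed, root (UID 0) may open connections, and any other new or invalid connection to that address is dropped. A reconstruction of those three rules as plain iptables calls (how waagent itself installs them is not shown in this log, only the resulting counters):

    import subprocess

    WIRE = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRE, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRE,
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRE,
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:       # needs root; -w waits for the xtables lock
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)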
Feb 13 15:16:56.214431 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:16:56.252876 kubelet[2214]: E0213 15:16:56.252781 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:16:56.255203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:16:56.255434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:16:56.255869 systemd[1]: kubelet.service: Consumed 117ms CPU time, 91.8M memory peak. Feb 13 15:16:57.275432 chronyd[1703]: Selected source PHC0 Feb 13 15:17:06.258146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:17:06.269251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:06.537413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:06.540801 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:06.575648 kubelet[2229]: E0213 15:17:06.575568 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:06.577749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:06.577899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:06.578376 systemd[1]: kubelet.service: Consumed 115ms CPU time, 96.3M memory peak. Feb 13 15:17:16.758026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:17:16.767296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:17.058734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:17.062871 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:17.099063 kubelet[2244]: E0213 15:17:17.098964 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:17.100677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:17.100811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:17.101255 systemd[1]: kubelet.service: Consumed 118ms CPU time, 94.3M memory peak. Feb 13 15:17:17.115265 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 15:17:18.638669 update_engine[1726]: I20250213 15:17:18.638121 1726 update_attempter.cc:509] Updating boot flags... 
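The kubelet crash loop running through these entries is benign at this stage: /var/lib/kubelet/config.yaml is written by kubeadm init/join, which has not run yet, so every start fails on the missing file and systemd schedules another restart. The failures are spaced the way a RestartSec=10 policy would space them (an assumption, since the unit file itself is not shown):

    from datetime import datetime

    # "command failed" timestamps copied from the entries above.
    stamps = ["15:16:35.102", "15:16:45.530", "15:16:56.252",
              "15:17:06.575", "15:17:17.098"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])])
    # -> [10.4, 10.7, 10.3, 10.5]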
Feb 13 15:17:18.688124 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2266) Feb 13 15:17:18.796600 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2269) Feb 13 15:17:27.257938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:17:27.266269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:27.551948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:27.555928 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:27.590345 kubelet[2373]: E0213 15:17:27.590250 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:27.592579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:27.592733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:27.593246 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.2M memory peak. Feb 13 15:17:29.174209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:17:29.180396 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:58074.service - OpenSSH per-connection server daemon (10.200.16.10:58074). Feb 13 15:17:29.756179 sshd[2381]: Accepted publickey for core from 10.200.16.10 port 58074 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:29.757429 sshd-session[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:29.762656 systemd-logind[1720]: New session 3 of user core. Feb 13 15:17:29.764255 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:17:30.158463 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:58084.service - OpenSSH per-connection server daemon (10.200.16.10:58084). Feb 13 15:17:30.607299 sshd[2386]: Accepted publickey for core from 10.200.16.10 port 58084 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:30.608693 sshd-session[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:30.614044 systemd-logind[1720]: New session 4 of user core. Feb 13 15:17:30.619305 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:17:30.928313 sshd[2388]: Connection closed by 10.200.16.10 port 58084 Feb 13 15:17:30.928066 sshd-session[2386]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:30.931958 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:58084.service: Deactivated successfully. Feb 13 15:17:30.934612 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:30.936764 systemd-logind[1720]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:30.937914 systemd-logind[1720]: Removed session 4. Feb 13 15:17:31.029326 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:58092.service - OpenSSH per-connection server daemon (10.200.16.10:58092). 
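The SHA256:0w1Drd4i... strings in the "Accepted publickey" lines are OpenSSH's standard key fingerprints: SHA-256 over the raw public-key blob, base64 encoded with the padding stripped. In Python terms:

    import base64
    import hashlib

    def ssh_fingerprint(key_blob: bytes) -> str:
        digest = hashlib.sha256(key_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Demo on placeholder bytes; the actual key blob is not in the log.
    print(ssh_fingerprint(b"\x00\x00\x00\x07ssh-rsa..."))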
Feb 13 15:17:31.518779 sshd[2394]: Accepted publickey for core from 10.200.16.10 port 58092 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:31.520029 sshd-session[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:31.525151 systemd-logind[1720]: New session 5 of user core. Feb 13 15:17:31.530235 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:31.865514 sshd[2396]: Connection closed by 10.200.16.10 port 58092 Feb 13 15:17:31.865165 sshd-session[2394]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:31.869083 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:58092.service: Deactivated successfully. Feb 13 15:17:31.870948 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:31.871703 systemd-logind[1720]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:31.872720 systemd-logind[1720]: Removed session 5. Feb 13 15:17:31.934441 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:58100.service - OpenSSH per-connection server daemon (10.200.16.10:58100). Feb 13 15:17:32.347999 sshd[2402]: Accepted publickey for core from 10.200.16.10 port 58100 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:32.349294 sshd-session[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:32.353396 systemd-logind[1720]: New session 6 of user core. Feb 13 15:17:32.366302 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:17:32.659817 sshd[2404]: Connection closed by 10.200.16.10 port 58100 Feb 13 15:17:32.659603 sshd-session[2402]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:32.662808 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:58100.service: Deactivated successfully. Feb 13 15:17:32.664820 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:17:32.665948 systemd-logind[1720]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:17:32.667436 systemd-logind[1720]: Removed session 6. Feb 13 15:17:32.747669 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:58106.service - OpenSSH per-connection server daemon (10.200.16.10:58106). Feb 13 15:17:33.197449 sshd[2410]: Accepted publickey for core from 10.200.16.10 port 58106 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:33.198738 sshd-session[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:33.204432 systemd-logind[1720]: New session 7 of user core. Feb 13 15:17:33.206372 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:17:33.525166 sudo[2413]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:17:33.525455 sudo[2413]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:33.555456 sudo[2413]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:33.638240 sshd[2412]: Connection closed by 10.200.16.10 port 58106 Feb 13 15:17:33.638992 sshd-session[2410]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:33.642841 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:58106.service: Deactivated successfully. Feb 13 15:17:33.644511 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:17:33.646334 systemd-logind[1720]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:17:33.647651 systemd-logind[1720]: Removed session 7. 
Feb 13 15:17:33.709022 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:58114.service - OpenSSH per-connection server daemon (10.200.16.10:58114). Feb 13 15:17:34.124912 sshd[2419]: Accepted publickey for core from 10.200.16.10 port 58114 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:34.127319 sshd-session[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:34.132303 systemd-logind[1720]: New session 8 of user core. Feb 13 15:17:34.141247 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:17:34.360980 sudo[2423]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:17:34.361338 sudo[2423]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:34.364622 sudo[2423]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:34.369587 sudo[2422]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:17:34.369852 sudo[2422]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:34.383776 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:17:34.406040 augenrules[2445]: No rules Feb 13 15:17:34.407227 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:17:34.407423 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:17:34.408622 sudo[2422]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:34.484964 sshd[2421]: Connection closed by 10.200.16.10 port 58114 Feb 13 15:17:34.485527 sshd-session[2419]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:34.488887 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:58114.service: Deactivated successfully. Feb 13 15:17:34.490626 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:17:34.492195 systemd-logind[1720]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:17:34.493595 systemd-logind[1720]: Removed session 8. Feb 13 15:17:34.575367 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:58130.service - OpenSSH per-connection server daemon (10.200.16.10:58130). Feb 13 15:17:35.022219 sshd[2454]: Accepted publickey for core from 10.200.16.10 port 58130 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:17:35.023392 sshd-session[2454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:35.028899 systemd-logind[1720]: New session 9 of user core. Feb 13 15:17:35.035265 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:17:35.275596 sudo[2457]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:35.275888 sudo[2457]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:36.724350 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:36.724462 (dockerd)[2474]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:37.537753 dockerd[2474]: time="2025-02-13T15:17:37.537671177Z" level=info msg="Starting up" Feb 13 15:17:37.757948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:17:37.764304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:17:38.242247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:38.245634 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:38.261932 systemd[1]: var-lib-docker-metacopy\x2dcheck1700667672-merged.mount: Deactivated successfully. Feb 13 15:17:38.284678 kubelet[2502]: E0213 15:17:38.284549 2502 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:38.288754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:38.288900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:38.289593 systemd[1]: kubelet.service: Consumed 118ms CPU time, 96.5M memory peak. Feb 13 15:17:38.309212 dockerd[2474]: time="2025-02-13T15:17:38.309170675Z" level=info msg="Loading containers: start." Feb 13 15:17:38.496125 kernel: Initializing XFRM netlink socket Feb 13 15:17:38.630297 systemd-networkd[1352]: docker0: Link UP Feb 13 15:17:38.667236 dockerd[2474]: time="2025-02-13T15:17:38.667200264Z" level=info msg="Loading containers: done." Feb 13 15:17:38.689141 dockerd[2474]: time="2025-02-13T15:17:38.688853866Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:38.689141 dockerd[2474]: time="2025-02-13T15:17:38.688951105Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:17:38.689141 dockerd[2474]: time="2025-02-13T15:17:38.689068065Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:38.740340 dockerd[2474]: time="2025-02-13T15:17:38.739911242Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:38.740023 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:17:39.600707 containerd[1794]: time="2025-02-13T15:17:39.600648017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:17:40.463898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471942794.mount: Deactivated successfully. 
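With "API listen on /run/docker.sock" the daemon above is ready for clients. A quick smoke test with the Python docker SDK (pip install docker), pointed at that same socket:

    import docker

    client = docker.DockerClient(base_url="unix:///run/docker.sock")
    print(client.version()["Version"])   # 27.3.1 on this host, per the log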
Feb 13 15:17:42.239154 containerd[1794]: time="2025-02-13T15:17:42.238481465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:42.241159 containerd[1794]: time="2025-02-13T15:17:42.241106255Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375"
Feb 13 15:17:42.244669 containerd[1794]: time="2025-02-13T15:17:42.244618282Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:42.252567 containerd[1794]: time="2025-02-13T15:17:42.252492134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:42.253851 containerd[1794]: time="2025-02-13T15:17:42.253664490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.652976233s"
Feb 13 15:17:42.253851 containerd[1794]: time="2025-02-13T15:17:42.253701770Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\""
Feb 13 15:17:42.254530 containerd[1794]: time="2025-02-13T15:17:42.254508607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 15:17:43.870146 containerd[1794]: time="2025-02-13T15:17:43.869612538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:43.872660 containerd[1794]: time="2025-02-13T15:17:43.872436966Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773"
Feb 13 15:17:43.875767 containerd[1794]: time="2025-02-13T15:17:43.875707632Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:43.881686 containerd[1794]: time="2025-02-13T15:17:43.881620528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:43.882829 containerd[1794]: time="2025-02-13T15:17:43.882656604Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.628049838s"
Feb 13 15:17:43.882829 containerd[1794]: time="2025-02-13T15:17:43.882691964Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\""
Feb 13 15:17:43.883553 containerd[1794]: time="2025-02-13T15:17:43.883333201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 15:17:45.251839 containerd[1794]: time="2025-02-13T15:17:45.251777618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:45.253850 containerd[1794]: time="2025-02-13T15:17:45.253611770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540"
Feb 13 15:17:45.258933 containerd[1794]: time="2025-02-13T15:17:45.258877908Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:45.263976 containerd[1794]: time="2025-02-13T15:17:45.263925088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:45.265316 containerd[1794]: time="2025-02-13T15:17:45.265131443Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.381768122s"
Feb 13 15:17:45.265316 containerd[1794]: time="2025-02-13T15:17:45.265166562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\""
Feb 13 15:17:45.265679 containerd[1794]: time="2025-02-13T15:17:45.265657320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 15:17:46.425357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229232727.mount: Deactivated successfully.
Feb 13 15:17:46.764439 containerd[1794]: time="2025-02-13T15:17:46.764376355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:46.766818 containerd[1794]: time="2025-02-13T15:17:46.766773832Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 15:17:46.769504 containerd[1794]: time="2025-02-13T15:17:46.769454509Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:46.773529 containerd[1794]: time="2025-02-13T15:17:46.773454944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:46.774121 containerd[1794]: time="2025-02-13T15:17:46.774017783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.507559386s"
Feb 13 15:17:46.774121 containerd[1794]: time="2025-02-13T15:17:46.774052183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 15:17:46.774587 containerd[1794]: time="2025-02-13T15:17:46.774552862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:17:47.448138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266065155.mount: Deactivated successfully.
Feb 13 15:17:48.507982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Feb 13 15:17:48.513274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:48.595448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:48.599311 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:17:49.159104 kubelet[2797]: E0213 15:17:48.765817 2797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:17:48.767486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:17:48.767608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:17:48.768018 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.6M memory peak.
Feb 13 15:17:49.182748 containerd[1794]: time="2025-02-13T15:17:49.182679098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:49.185325 containerd[1794]: time="2025-02-13T15:17:49.185271368Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 15:17:49.191501 containerd[1794]: time="2025-02-13T15:17:49.191446023Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:49.197271 containerd[1794]: time="2025-02-13T15:17:49.197190880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:49.199053 containerd[1794]: time="2025-02-13T15:17:49.198264676Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.423676414s"
Feb 13 15:17:49.199053 containerd[1794]: time="2025-02-13T15:17:49.198301076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:17:49.199053 containerd[1794]: time="2025-02-13T15:17:49.198799114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 15:17:50.003149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807772890.mount: Deactivated successfully.
Feb 13 15:17:50.153153 containerd[1794]: time="2025-02-13T15:17:50.152577981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:50.196279 containerd[1794]: time="2025-02-13T15:17:50.196223606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 15:17:50.242875 containerd[1794]: time="2025-02-13T15:17:50.242796260Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:50.258852 containerd[1794]: time="2025-02-13T15:17:50.258789236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:50.259808 containerd[1794]: time="2025-02-13T15:17:50.259586153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.06075996s"
Feb 13 15:17:50.259808 containerd[1794]: time="2025-02-13T15:17:50.259621313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 15:17:50.260041 containerd[1794]: time="2025-02-13T15:17:50.260011191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 15:17:51.963224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185147997.mount: Deactivated successfully.
Feb 13 15:17:59.007905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Feb 13 15:17:59.016603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:07.284584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:07.294632 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:18:07.330749 kubelet[2824]: E0213 15:18:07.330691 2824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:18:07.333505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:18:07.333641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:18:07.333940 systemd[1]: kubelet.service: Consumed 115ms CPU time, 94.2M memory peak.
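The restart counter climbing through 6, 7 and 8 above is systemd's own bookkeeping for the failing unit. If one is watching a node in this state, the counter can be read back without parsing the journal; a small sketch (assuming systemctl is on PATH, and using the NRestarts property that systemd exposes for service units):

#!/usr/bin/env python3
# Sketch: read the restart counter that appears in the log as
# "kubelet.service: Scheduled restart job, restart counter is at N".
import subprocess

out = subprocess.run(
    ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
    capture_output=True, text=True, check=True,
).stdout.strip()  # e.g. "NRestarts=8"
print(out)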
Feb 13 15:18:08.890126 containerd[1794]: time="2025-02-13T15:18:08.889364873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:08.891889 containerd[1794]: time="2025-02-13T15:18:08.891604944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Feb 13 15:18:08.895785 containerd[1794]: time="2025-02-13T15:18:08.895756928Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:08.901083 containerd[1794]: time="2025-02-13T15:18:08.901041028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:08.902446 containerd[1794]: time="2025-02-13T15:18:08.902416703Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 18.642364752s"
Feb 13 15:18:08.902638 containerd[1794]: time="2025-02-13T15:18:08.902537902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Feb 13 15:18:14.266537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.266803 systemd[1]: kubelet.service: Consumed 115ms CPU time, 94.2M memory peak.
Feb 13 15:18:14.274392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:14.305141 systemd[1]: Reload requested from client PID 2903 ('systemctl') (unit session-9.scope)...
Feb 13 15:18:14.305153 systemd[1]: Reloading...
Feb 13 15:18:14.429255 zram_generator::config[2960]: No configuration found.
Feb 13 15:18:14.522122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:14.624344 systemd[1]: Reloading finished in 318 ms.
Feb 13 15:18:14.665387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.670582 (kubelet)[3008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:18:14.671443 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:14.672946 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:18:14.673230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.673286 systemd[1]: kubelet.service: Consumed 79ms CPU time, 82.3M memory peak.
Feb 13 15:18:14.678363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:14.766314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
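At this point every control-plane image has been pulled, and each "Pulled image" entry records a wall-clock duration (etcd at ~18.6s dominates; the others take 1-3s). A sketch for extracting those timings from a saved journal, for example `journalctl -b > boot.log`; the regex below is written against the containerd entries shown here and is not part of any tool:

#!/usr/bin/env python3
# Sketch: list image pull durations from a saved journal file.
import re
import sys

# Matches the containerd "Pulled image \"<ref>\" ... in <secs>s" entries;
# the quotes inside msg="" are backslash-escaped in the journal.
PULL = re.compile(r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*? in (?P<secs>[0-9.]+)s')

for line in open(sys.argv[1]):
    m = PULL.search(line)
    if m:
        print(f'{m.group("image"):50s} {float(m.group("secs")):9.3f}s')
# For this boot the largest entry should be registry.k8s.io/etcd:3.5.15-0
# at roughly 18.642s.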
Feb 13 15:18:14.770708 (kubelet)[3020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:18:14.804738 kubelet[3020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:14.805879 kubelet[3020]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:18:14.805879 kubelet[3020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:14.805879 kubelet[3020]: I0213 15:18:14.805559 3020 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:18:15.462416 kubelet[3020]: I0213 15:18:15.462372 3020 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:18:15.462416 kubelet[3020]: I0213 15:18:15.462407 3020 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:18:15.462676 kubelet[3020]: I0213 15:18:15.462654 3020 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:18:15.482299 kubelet[3020]: E0213 15:18:15.482248 3020 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:15.483033 kubelet[3020]: I0213 15:18:15.482740 3020 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:18:15.488324 kubelet[3020]: E0213 15:18:15.488290 3020 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:18:15.488324 kubelet[3020]: I0213 15:18:15.488322 3020 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:18:15.492638 kubelet[3020]: I0213 15:18:15.492607 3020 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:18:15.494074 kubelet[3020]: I0213 15:18:15.494006 3020 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:18:15.494848 kubelet[3020]: I0213 15:18:15.494275 3020 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:18:15.494848 kubelet[3020]: I0213 15:18:15.494304 3020 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-0ecc5c528f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:18:15.494848 kubelet[3020]: I0213 15:18:15.494587 3020 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:18:15.494848 kubelet[3020]: I0213 15:18:15.494597 3020 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:18:15.495027 kubelet[3020]: I0213 15:18:15.494716 3020 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:15.496364 kubelet[3020]: I0213 15:18:15.496344 3020 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:18:15.496460 kubelet[3020]: I0213 15:18:15.496449 3020 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:18:15.496547 kubelet[3020]: I0213 15:18:15.496536 3020 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:18:15.496604 kubelet[3020]: I0213 15:18:15.496596 3020 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:18:15.498645 kubelet[3020]: I0213 15:18:15.498618 3020 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:18:15.500269 kubelet[3020]: I0213 15:18:15.500241 3020 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:18:15.500924 kubelet[3020]: W0213 15:18:15.500650 3020 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:18:15.501190 kubelet[3020]: I0213 15:18:15.501169 3020 server.go:1269] "Started kubelet"
Feb 13 15:18:15.506111 kubelet[3020]: I0213 15:18:15.505768 3020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:18:15.507580 kubelet[3020]: W0213 15:18:15.507538 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:15.507973 kubelet[3020]: E0213 15:18:15.507947 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:15.509140 kubelet[3020]: E0213 15:18:15.508121 3020 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-0ecc5c528f.1823cd8f14624d7d default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-0ecc5c528f,UID:ci-4230.0.1-a-0ecc5c528f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-0ecc5c528f,},FirstTimestamp:2025-02-13 15:18:15.501147517 +0000 UTC m=+0.727368039,LastTimestamp:2025-02-13 15:18:15.501147517 +0000 UTC m=+0.727368039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-0ecc5c528f,}"
Feb 13 15:18:15.509375 kubelet[3020]: W0213 15:18:15.509338 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-0ecc5c528f&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:15.509474 kubelet[3020]: E0213 15:18:15.509454 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-0ecc5c528f&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:15.511245 kubelet[3020]: I0213 15:18:15.510554 3020 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:18:15.513114 kubelet[3020]: I0213 15:18:15.512160 3020 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:18:15.513114 kubelet[3020]: I0213 15:18:15.511115 3020 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:18:15.513114 kubelet[3020]: I0213 15:18:15.512930 3020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:18:15.513295 kubelet[3020]: I0213 15:18:15.513280 3020 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:18:15.513655 kubelet[3020]: I0213 15:18:15.513638 3020 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:18:15.514529 kubelet[3020]: I0213 15:18:15.511076 3020 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:18:15.514674 kubelet[3020]: E0213 15:18:15.511281 3020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-0ecc5c528f\" not found"
Feb 13 15:18:15.514967 kubelet[3020]: I0213 15:18:15.514955 3020 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:18:15.515488 kubelet[3020]: E0213 15:18:15.515463 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-0ecc5c528f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms"
Feb 13 15:18:15.515804 kubelet[3020]: I0213 15:18:15.515787 3020 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:18:15.515954 kubelet[3020]: I0213 15:18:15.515938 3020 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:18:15.517752 kubelet[3020]: W0213 15:18:15.517716 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:15.517866 kubelet[3020]: E0213 15:18:15.517847 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:15.518273 kubelet[3020]: I0213 15:18:15.518254 3020 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:18:15.521210 kubelet[3020]: I0213 15:18:15.521171 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:18:15.522000 kubelet[3020]: I0213 15:18:15.521972 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:18:15.522000 kubelet[3020]: I0213 15:18:15.521997 3020 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:18:15.522075 kubelet[3020]: I0213 15:18:15.522017 3020 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:18:15.522075 kubelet[3020]: E0213 15:18:15.522055 3020 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:18:15.528003 kubelet[3020]: W0213 15:18:15.527947 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:15.528081 kubelet[3020]: E0213 15:18:15.528006 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:15.528294 kubelet[3020]: E0213 15:18:15.528268 3020 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:18:15.548024 kubelet[3020]: I0213 15:18:15.547937 3020 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:18:15.548024 kubelet[3020]: I0213 15:18:15.547956 3020 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:18:15.548024 kubelet[3020]: I0213 15:18:15.547976 3020 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:15.552893 kubelet[3020]: I0213 15:18:15.552872 3020 policy_none.go:49] "None policy: Start"
Feb 13 15:18:15.553969 kubelet[3020]: I0213 15:18:15.553654 3020 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:18:15.553969 kubelet[3020]: I0213 15:18:15.553678 3020 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:18:15.561820 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:18:15.576108 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:18:15.580469 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:18:15.589277 kubelet[3020]: I0213 15:18:15.589244 3020 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:18:15.589482 kubelet[3020]: I0213 15:18:15.589455 3020 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:18:15.589520 kubelet[3020]: I0213 15:18:15.589476 3020 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:18:15.589939 kubelet[3020]: I0213 15:18:15.589857 3020 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:18:15.591746 kubelet[3020]: E0213 15:18:15.591713 3020 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-0ecc5c528f\" not found"
Feb 13 15:18:15.634214 systemd[1]: Created slice kubepods-burstable-pod5e82ab61643fb97ff905ea7a72aea436.slice - libcontainer container kubepods-burstable-pod5e82ab61643fb97ff905ea7a72aea436.slice.
Feb 13 15:18:15.644973 systemd[1]: Created slice kubepods-burstable-podf63f198673f552109ce248a7c6024034.slice - libcontainer container kubepods-burstable-podf63f198673f552109ce248a7c6024034.slice.
Feb 13 15:18:15.648511 systemd[1]: Created slice kubepods-burstable-pod52744983882a8432a5c80d49e5bfb1fc.slice - libcontainer container kubepods-burstable-pod52744983882a8432a5c80d49e5bfb1fc.slice.
Feb 13 15:18:15.691801 kubelet[3020]: I0213 15:18:15.691758 3020 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.692183 kubelet[3020]: E0213 15:18:15.692116 3020 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.716949 kubelet[3020]: I0213 15:18:15.716635 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.716949 kubelet[3020]: I0213 15:18:15.716683 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.716949 kubelet[3020]: I0213 15:18:15.716703 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.716949 kubelet[3020]: I0213 15:18:15.716718 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.716949 kubelet[3020]: I0213 15:18:15.716735 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.717370 kubelet[3020]: I0213 15:18:15.716752 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.717370 kubelet[3020]: I0213 15:18:15.716767 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.717370 kubelet[3020]: I0213 15:18:15.716782 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.717370 kubelet[3020]: I0213 15:18:15.716796 3020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f63f198673f552109ce248a7c6024034-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-0ecc5c528f\" (UID: \"f63f198673f552109ce248a7c6024034\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.717370 kubelet[3020]: E0213 15:18:15.716826 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-0ecc5c528f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms"
Feb 13 15:18:15.893967 kubelet[3020]: I0213 15:18:15.893922 3020 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.894323 kubelet[3020]: E0213 15:18:15.894280 3020 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:15.943906 containerd[1794]: time="2025-02-13T15:18:15.943854469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-0ecc5c528f,Uid:5e82ab61643fb97ff905ea7a72aea436,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:15.949632 containerd[1794]: time="2025-02-13T15:18:15.949415209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-0ecc5c528f,Uid:f63f198673f552109ce248a7c6024034,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:15.951414 containerd[1794]: time="2025-02-13T15:18:15.951358402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-0ecc5c528f,Uid:52744983882a8432a5c80d49e5bfb1fc,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:16.117317 kubelet[3020]: E0213 15:18:16.117253 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-0ecc5c528f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms"
Feb 13 15:18:16.296672 kubelet[3020]: I0213 15:18:16.296629 3020 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:16.296979 kubelet[3020]: E0213 15:18:16.296946 3020 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:16.387934 kubelet[3020]: W0213 15:18:16.387798 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:16.387934 kubelet[3020]: E0213 15:18:16.387869 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:16.448376 kubelet[3020]: W0213 15:18:16.448313 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:16.448518 kubelet[3020]: E0213 15:18:16.448384 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:16.610350 kubelet[3020]: W0213 15:18:16.610290 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-0ecc5c528f&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:16.610493 kubelet[3020]: E0213 15:18:16.610360 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-0ecc5c528f&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:16.918403 kubelet[3020]: E0213 15:18:16.918340 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-0ecc5c528f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s"
Feb 13 15:18:16.982004 kubelet[3020]: W0213 15:18:16.981903 3020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Feb 13 15:18:16.982004 kubelet[3020]: E0213 15:18:16.981974 3020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:17.028200 kubelet[3020]: E0213 15:18:17.028070 3020 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-0ecc5c528f.1823cd8f14624d7d default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-0ecc5c528f,UID:ci-4230.0.1-a-0ecc5c528f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-0ecc5c528f,},FirstTimestamp:2025-02-13 15:18:15.501147517 +0000 UTC m=+0.727368039,LastTimestamp:2025-02-13 15:18:15.501147517 +0000 UTC m=+0.727368039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-0ecc5c528f,}"
Feb 13 15:18:17.098661 kubelet[3020]: I0213 15:18:17.098625 3020 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:17.099042 kubelet[3020]: E0213 15:18:17.099015 3020 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:17.257875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072817893.mount: Deactivated successfully.
Feb 13 15:18:17.295358 containerd[1794]: time="2025-02-13T15:18:17.295300841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:17.298536 containerd[1794]: time="2025-02-13T15:18:17.298483029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 15:18:17.301440 containerd[1794]: time="2025-02-13T15:18:17.301394819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:17.310390 containerd[1794]: time="2025-02-13T15:18:17.310338626Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:17.314166 containerd[1794]: time="2025-02-13T15:18:17.313961453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:18:17.316856 containerd[1794]: time="2025-02-13T15:18:17.316814523Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:17.320062 containerd[1794]: time="2025-02-13T15:18:17.319970911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:18:17.333288 containerd[1794]: time="2025-02-13T15:18:17.332390346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:17.333414 containerd[1794]: time="2025-02-13T15:18:17.333364783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.381945941s"
Feb 13 15:18:17.335656 containerd[1794]: time="2025-02-13T15:18:17.335618255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.391671226s"
Feb 13 15:18:17.338051 containerd[1794]: time="2025-02-13T15:18:17.338008926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.388533757s"
Feb 13 15:18:17.607396 kubelet[3020]: E0213 15:18:17.607284 3020 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:18.307405 containerd[1794]: time="2025-02-13T15:18:18.307303959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:18.308576 containerd[1794]: time="2025-02-13T15:18:18.307378679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:18.308576 containerd[1794]: time="2025-02-13T15:18:18.308454596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.308839 containerd[1794]: time="2025-02-13T15:18:18.308608076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.309954 containerd[1794]: time="2025-02-13T15:18:18.309857673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:18.309954 containerd[1794]: time="2025-02-13T15:18:18.309923353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:18.309954 containerd[1794]: time="2025-02-13T15:18:18.309935913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.310235 containerd[1794]: time="2025-02-13T15:18:18.310013432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.311331 containerd[1794]: time="2025-02-13T15:18:18.310879750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:18.311331 containerd[1794]: time="2025-02-13T15:18:18.310936310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:18.311331 containerd[1794]: time="2025-02-13T15:18:18.310951310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.311905 containerd[1794]: time="2025-02-13T15:18:18.311839228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.342454 systemd[1]: Started cri-containerd-7eca0ca38dfd654e7ca4865c98155f7e63a453cbdce0b08d6ce097f2c04647f0.scope - libcontainer container 7eca0ca38dfd654e7ca4865c98155f7e63a453cbdce0b08d6ce097f2c04647f0.
Feb 13 15:18:18.350263 systemd[1]: Started cri-containerd-edf9d7601f6fb8e1e51faa3709711d3a78df46607604af7b8cc5047b14c5e2b8.scope - libcontainer container edf9d7601f6fb8e1e51faa3709711d3a78df46607604af7b8cc5047b14c5e2b8.
Feb 13 15:18:18.359344 systemd[1]: Started cri-containerd-6b296310f406819f8d6d0ba9a581475ecf98c86ddb246654ce3d244a0303bb22.scope - libcontainer container 6b296310f406819f8d6d0ba9a581475ecf98c86ddb246654ce3d244a0303bb22.
Feb 13 15:18:18.402527 containerd[1794]: time="2025-02-13T15:18:18.401942838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-0ecc5c528f,Uid:5e82ab61643fb97ff905ea7a72aea436,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eca0ca38dfd654e7ca4865c98155f7e63a453cbdce0b08d6ce097f2c04647f0\""
Feb 13 15:18:18.409711 containerd[1794]: time="2025-02-13T15:18:18.409394419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-0ecc5c528f,Uid:52744983882a8432a5c80d49e5bfb1fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf9d7601f6fb8e1e51faa3709711d3a78df46607604af7b8cc5047b14c5e2b8\""
Feb 13 15:18:18.410108 containerd[1794]: time="2025-02-13T15:18:18.410038378Z" level=info msg="CreateContainer within sandbox \"7eca0ca38dfd654e7ca4865c98155f7e63a453cbdce0b08d6ce097f2c04647f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:18:18.413772 containerd[1794]: time="2025-02-13T15:18:18.413742528Z" level=info msg="CreateContainer within sandbox \"edf9d7601f6fb8e1e51faa3709711d3a78df46607604af7b8cc5047b14c5e2b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:18:18.425128 containerd[1794]: time="2025-02-13T15:18:18.425033940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-0ecc5c528f,Uid:f63f198673f552109ce248a7c6024034,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b296310f406819f8d6d0ba9a581475ecf98c86ddb246654ce3d244a0303bb22\""
Feb 13 15:18:18.428898 containerd[1794]: time="2025-02-13T15:18:18.428855490Z" level=info msg="CreateContainer within sandbox \"6b296310f406819f8d6d0ba9a581475ecf98c86ddb246654ce3d244a0303bb22\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:18:18.460801 containerd[1794]: time="2025-02-13T15:18:18.460755129Z" level=info msg="CreateContainer within sandbox \"7eca0ca38dfd654e7ca4865c98155f7e63a453cbdce0b08d6ce097f2c04647f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f864e8af3d91e9fc60aac4073be8cba40217e1763f1f0172cec2bbd7b7f76c63\""
Feb 13 15:18:18.461601 containerd[1794]: time="2025-02-13T15:18:18.461576167Z" level=info msg="StartContainer for \"f864e8af3d91e9fc60aac4073be8cba40217e1763f1f0172cec2bbd7b7f76c63\""
Feb 13 15:18:18.481695 containerd[1794]: time="2025-02-13T15:18:18.480680118Z" level=info msg="CreateContainer within sandbox \"edf9d7601f6fb8e1e51faa3709711d3a78df46607604af7b8cc5047b14c5e2b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cdabfa8ebe8839cda926b954cb80bf8fb305b2f261ad393c6d520431aa516ddd\""
Feb 13 15:18:18.481695 containerd[1794]: time="2025-02-13T15:18:18.481382276Z" level=info msg="StartContainer for \"cdabfa8ebe8839cda926b954cb80bf8fb305b2f261ad393c6d520431aa516ddd\""
Feb 13 15:18:18.488582 systemd[1]: Started cri-containerd-f864e8af3d91e9fc60aac4073be8cba40217e1763f1f0172cec2bbd7b7f76c63.scope - libcontainer container f864e8af3d91e9fc60aac4073be8cba40217e1763f1f0172cec2bbd7b7f76c63.
Feb 13 15:18:18.490452 containerd[1794]: time="2025-02-13T15:18:18.490343493Z" level=info msg="CreateContainer within sandbox \"6b296310f406819f8d6d0ba9a581475ecf98c86ddb246654ce3d244a0303bb22\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd6f1ebdd12139c870d3a48aec6500722ed7c42e894acd29a31511a7724b3203\""
Feb 13 15:18:18.491015 containerd[1794]: time="2025-02-13T15:18:18.490993972Z" level=info msg="StartContainer for \"fd6f1ebdd12139c870d3a48aec6500722ed7c42e894acd29a31511a7724b3203\""
Feb 13 15:18:18.519080 kubelet[3020]: E0213 15:18:18.519038 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-0ecc5c528f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="3.2s"
Feb 13 15:18:18.531971 systemd[1]: Started cri-containerd-cdabfa8ebe8839cda926b954cb80bf8fb305b2f261ad393c6d520431aa516ddd.scope - libcontainer container cdabfa8ebe8839cda926b954cb80bf8fb305b2f261ad393c6d520431aa516ddd.
Feb 13 15:18:18.539401 systemd[1]: Started cri-containerd-fd6f1ebdd12139c870d3a48aec6500722ed7c42e894acd29a31511a7724b3203.scope - libcontainer container fd6f1ebdd12139c870d3a48aec6500722ed7c42e894acd29a31511a7724b3203.
Feb 13 15:18:18.556804 containerd[1794]: time="2025-02-13T15:18:18.556643685Z" level=info msg="StartContainer for \"f864e8af3d91e9fc60aac4073be8cba40217e1763f1f0172cec2bbd7b7f76c63\" returns successfully"
Feb 13 15:18:18.604710 containerd[1794]: time="2025-02-13T15:18:18.604582803Z" level=info msg="StartContainer for \"cdabfa8ebe8839cda926b954cb80bf8fb305b2f261ad393c6d520431aa516ddd\" returns successfully"
Feb 13 15:18:18.614120 containerd[1794]: time="2025-02-13T15:18:18.614057619Z" level=info msg="StartContainer for \"fd6f1ebdd12139c870d3a48aec6500722ed7c42e894acd29a31511a7724b3203\" returns successfully"
Feb 13 15:18:18.704746 kubelet[3020]: I0213 15:18:18.702535 3020 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:21.421460 kubelet[3020]: I0213 15:18:21.421168 3020 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:21.510883 kubelet[3020]: I0213 15:18:21.510661 3020 apiserver.go:52] "Watching apiserver"
Feb 13 15:18:21.513951 kubelet[3020]: I0213 15:18:21.513877 3020 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:18:21.624423 kubelet[3020]: E0213 15:18:21.624373 3020 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.0.1-a-0ecc5c528f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:21.625468 kubelet[3020]: E0213 15:18:21.624633 3020 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:21.785445 kubelet[3020]: E0213 15:18:21.785402 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s"
Feb 13 15:18:23.334084 kubelet[3020]: W0213 15:18:23.333760 3020 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 15:18:23.709381 systemd[1]: Reload requested from client PID 3294 ('systemctl') (unit session-9.scope)...
Feb 13 15:18:23.709399 systemd[1]: Reloading...
Feb 13 15:18:23.793169 zram_generator::config[3340]: No configuration found.
Feb 13 15:18:23.905677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:24.018066 systemd[1]: Reloading finished in 308 ms.
Feb 13 15:18:24.038410 kubelet[3020]: I0213 15:18:24.038211 3020 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:18:24.038231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:24.050455 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:18:24.050660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:24.050710 systemd[1]: kubelet.service: Consumed 1.067s CPU time, 115.9M memory peak.
Feb 13 15:18:24.054484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:24.244264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
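Once the kube-apiserver container starts, the long run of "dial tcp 10.200.20.11:6443: connect: connection refused" errors resolves and the node registers successfully. Note how the lease controller's retry interval doubled across the earlier entries (200ms, 400ms, 800ms, 1.6s, 3.2s, then 6.4s). A sketch of the same probe-with-backoff pattern against the endpoint from the log, for checking when the API server port starts accepting connections; the backoff constants simply mirror the intervals seen above:

#!/usr/bin/env python3
# Sketch: wait for the API server endpoint seen in the log to accept
# TCP connections, backing off like the kubelet's lease retries did.
import socket
import time

HOST, PORT = "10.200.20.11", 6443  # endpoint from the log
wait = 0.2
while True:
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print("apiserver is accepting connections")
            break
    except OSError as e:           # covers "connect: connection refused"
        print(f"connect failed ({e}); retrying in {wait}s")
        time.sleep(wait)
        wait = min(wait * 2, 6.4)  # 200ms -> 400ms -> ... -> 6.4s cap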
Feb 13 15:18:24.249173 (kubelet)[3405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:18:24.286309 kubelet[3405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:24.286652 kubelet[3405]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:18:24.286696 kubelet[3405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:24.286881 kubelet[3405]: I0213 15:18:24.286810 3405 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:18:24.298501 kubelet[3405]: I0213 15:18:24.298446 3405 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:18:24.298649 kubelet[3405]: I0213 15:18:24.298637 3405 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:18:24.298935 kubelet[3405]: I0213 15:18:24.298918 3405 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:18:24.302521 kubelet[3405]: I0213 15:18:24.302500 3405 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:18:24.306326 kubelet[3405]: I0213 15:18:24.306304 3405 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:18:24.307780 kubelet[3405]: E0213 15:18:24.307756 3405 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:18:24.307921 kubelet[3405]: I0213 15:18:24.307908 3405 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:18:24.310995 kubelet[3405]: I0213 15:18:24.310973 3405 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:18:24.311261 kubelet[3405]: I0213 15:18:24.311244 3405 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:18:24.311473 kubelet[3405]: I0213 15:18:24.311441 3405 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:18:24.311697 kubelet[3405]: I0213 15:18:24.311531 3405 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-0ecc5c528f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:18:24.311826 kubelet[3405]: I0213 15:18:24.311811 3405 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:18:24.311892 kubelet[3405]: I0213 15:18:24.311882 3405 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:18:24.311980 kubelet[3405]: I0213 15:18:24.311969 3405 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:24.312170 kubelet[3405]: I0213 15:18:24.312152 3405 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:18:24.312679 kubelet[3405]: I0213 15:18:24.312655 3405 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:18:24.312785 kubelet[3405]: I0213 15:18:24.312774 3405 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:18:24.312851 kubelet[3405]: I0213 15:18:24.312841 3405 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:18:24.327958 kubelet[3405]: I0213 15:18:24.327929 3405 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:18:24.328653 kubelet[3405]: I0213 15:18:24.328634 3405 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:18:24.330117 kubelet[3405]: I0213 15:18:24.329217 3405 server.go:1269] "Started kubelet"
Feb 13 15:18:24.331395 kubelet[3405]: I0213 15:18:24.331377 3405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:18:24.336054 kubelet[3405]: I0213 15:18:24.336027 3405 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:18:24.337155 kubelet[3405]: I0213 15:18:24.337137 3405 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:18:24.339365 kubelet[3405]: I0213 15:18:24.339320 3405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:18:24.341313 kubelet[3405]: I0213 15:18:24.341294 3405 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:18:24.341649 kubelet[3405]: I0213 15:18:24.341628 3405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:18:24.345523 kubelet[3405]: I0213 15:18:24.345502 3405 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:18:24.345865 kubelet[3405]: E0213 15:18:24.345841 3405 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-0ecc5c528f\" not found"
Feb 13 15:18:24.348900 kubelet[3405]: I0213 15:18:24.348880 3405 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:18:24.350252 kubelet[3405]: I0213 15:18:24.349488 3405 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:18:24.352694 kubelet[3405]: I0213 15:18:24.352669 3405 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:18:24.352935 kubelet[3405]: I0213 15:18:24.352910 3405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:18:24.355348 kubelet[3405]: E0213 15:18:24.355241 3405 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:18:24.356421 kubelet[3405]: I0213 15:18:24.356313 3405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:18:24.357586 kubelet[3405]: I0213 15:18:24.357403 3405 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:18:24.361187 kubelet[3405]: I0213 15:18:24.360384 3405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:18:24.361187 kubelet[3405]: I0213 15:18:24.360412 3405 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:18:24.361187 kubelet[3405]: I0213 15:18:24.360431 3405 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:18:24.361187 kubelet[3405]: E0213 15:18:24.360470 3405 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:18:24.411335 kubelet[3405]: I0213 15:18:24.411310 3405 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:18:24.411575 kubelet[3405]: I0213 15:18:24.411527 3405 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:18:24.411670 kubelet[3405]: I0213 15:18:24.411659 3405 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:24.411953 kubelet[3405]: I0213 15:18:24.411872 3405 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:18:24.411953 kubelet[3405]: I0213 15:18:24.411889 3405 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:18:24.411953 kubelet[3405]: I0213 15:18:24.411912 3405 policy_none.go:49] "None policy: Start"
Feb 13 15:18:24.412725 kubelet[3405]: I0213 15:18:24.412697 3405 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:18:24.412725 kubelet[3405]: I0213 15:18:24.412723 3405 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:18:24.412885 kubelet[3405]: I0213 15:18:24.412865 3405 state_mem.go:75] "Updated machine memory state"
Feb 13 15:18:24.417492 kubelet[3405]: I0213 15:18:24.416978 3405 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:18:24.417492 kubelet[3405]: I0213 15:18:24.417164 3405 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:18:24.417492 kubelet[3405]: I0213 15:18:24.417176 3405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:18:24.417492 kubelet[3405]: I0213 15:18:24.417382 3405 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:18:24.472695 kubelet[3405]: W0213 15:18:24.472645 3405 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 15:18:24.473037 kubelet[3405]: W0213 15:18:24.473015 3405 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 15:18:24.473222 kubelet[3405]: W0213 15:18:24.473201 3405 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 15:18:24.473300 kubelet[3405]: E0213 15:18:24.473247 3405 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" already exists" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.519844 kubelet[3405]: I0213 15:18:24.519789 3405 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.537158 kubelet[3405]: I0213 15:18:24.536954 3405 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.538406 kubelet[3405]: I0213 15:18:24.538128 3405 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551002 kubelet[3405]: I0213 15:18:24.550947 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551653 kubelet[3405]: I0213 15:18:24.551369 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551653 kubelet[3405]: I0213 15:18:24.551401 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551653 kubelet[3405]: I0213 15:18:24.551464 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551653 kubelet[3405]: I0213 15:18:24.551483 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551653 kubelet[3405]: I0213 15:18:24.551499 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e82ab61643fb97ff905ea7a72aea436-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" (UID: \"5e82ab61643fb97ff905ea7a72aea436\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551875 kubelet[3405]: I0213 15:18:24.551517 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551875 kubelet[3405]: I0213 15:18:24.551533 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52744983882a8432a5c80d49e5bfb1fc-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-0ecc5c528f\" (UID: \"52744983882a8432a5c80d49e5bfb1fc\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.551875 kubelet[3405]: I0213 15:18:24.551550 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f63f198673f552109ce248a7c6024034-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-0ecc5c528f\" (UID: \"f63f198673f552109ce248a7c6024034\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:24.732648 sudo[3439]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:18:24.733304 sudo[3439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:18:25.209329 sudo[3439]: pam_unix(sudo:session): session closed for user root
Feb 13 15:18:25.323951 kubelet[3405]: I0213 15:18:25.323910 3405 apiserver.go:52] "Watching apiserver"
Feb 13 15:18:25.350704 kubelet[3405]: I0213 15:18:25.350645 3405 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:18:25.404273 kubelet[3405]: W0213 15:18:25.404225 3405 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 15:18:25.404420 kubelet[3405]: E0213 15:18:25.404294 3405 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.1-a-0ecc5c528f\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f"
Feb 13 15:18:25.420149 kubelet[3405]: I0213 15:18:25.419221 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.1-a-0ecc5c528f" podStartSLOduration=1.419187913 podStartE2EDuration="1.419187913s" podCreationTimestamp="2025-02-13 15:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:25.418938354 +0000 UTC m=+1.166537880" watchObservedRunningTime="2025-02-13 15:18:25.419187913 +0000 UTC m=+1.166787479"
Feb 13 15:18:25.443010 kubelet[3405]: I0213 15:18:25.442953 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-0ecc5c528f" podStartSLOduration=2.442935569 podStartE2EDuration="2.442935569s" podCreationTimestamp="2025-02-13 15:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:25.432528117 +0000 UTC m=+1.180127683" watchObservedRunningTime="2025-02-13 15:18:25.442935569 +0000 UTC m=+1.190535135"
Feb 13 15:18:25.456328 kubelet[3405]: I0213 15:18:25.455873 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.1-a-0ecc5c528f" podStartSLOduration=1.455857495 podStartE2EDuration="1.455857495s" podCreationTimestamp="2025-02-13 15:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:25.443391928 +0000 UTC m=+1.190991494" watchObservedRunningTime="2025-02-13 15:18:25.455857495 +0000 UTC m=+1.203457061"
Feb 13 15:18:27.061532 sudo[2457]: pam_unix(sudo:session): session closed for user root
Feb 13 15:18:27.128983 sshd[2456]: Connection closed by 10.200.16.10 port 58130
Feb 13 15:18:27.129772 sshd-session[2454]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:27.134081 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:58130.service: Deactivated successfully.
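Buried in the "Creating Container Manager object based on Node Config" entry above is the nodeConfig JSON, including the hard-eviction thresholds this node runs with (they correspond to the kubelet's documented defaults). A short sketch that renders them readably; the array is copied verbatim from that log entry:

    import json

    # HardEvictionThresholds, copied from the nodeConfig dump above.
    THRESHOLDS = '''[
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}
    ]'''

    for t in json.loads(THRESHOLDS):
        value = t['Value']
        limit = value['Quantity'] or f"{value['Percentage']:.0%}"
        print(f"evict when {t['Signal']} < {limit}")
    # evict when nodefs.available < 10%
    # ...
    # evict when memory.available < 100Mi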
Feb 13 15:18:27.136931 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:18:27.137276 systemd[1]: session-9.scope: Consumed 7.166s CPU time, 256.2M memory peak.
Feb 13 15:18:27.138773 systemd-logind[1720]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:18:27.139791 systemd-logind[1720]: Removed session 9.
Feb 13 15:18:29.501226 kubelet[3405]: I0213 15:18:29.501184 3405 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:18:29.501948 containerd[1794]: time="2025-02-13T15:18:29.501856184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:18:29.502630 kubelet[3405]: I0213 15:18:29.502048 3405 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:18:30.152704 kubelet[3405]: W0213 15:18:30.152620 3405 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230.0.1-a-0ecc5c528f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.0.1-a-0ecc5c528f' and this object
Feb 13 15:18:30.152704 kubelet[3405]: E0213 15:18:30.152670 3405 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4230.0.1-a-0ecc5c528f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.0.1-a-0ecc5c528f' and this object" logger="UnhandledError"
Feb 13 15:18:30.159142 systemd[1]: Created slice kubepods-besteffort-pod1023356b_5936_46b5_be8c_ace328b02673.slice - libcontainer container kubepods-besteffort-pod1023356b_5936_46b5_be8c_ace328b02673.slice.
Feb 13 15:18:30.169911 systemd[1]: Created slice kubepods-burstable-pod94a4df6d_5281_4ccd_962b_6d5840557a94.slice - libcontainer container kubepods-burstable-pod94a4df6d_5281_4ccd_962b_6d5840557a94.slice.
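The kubelet entries above show the node receiving its pod CIDR, 192.168.0.0/24, and pushing it to the container runtime over CRI. A quick, illustrative check of what that allocation provides; the 110-pods figure is the kubelet's default max-pods and is not printed in this log:

    import ipaddress

    cidr = ipaddress.ip_network('192.168.0.0/24')  # from "Updating Pod CIDR" above
    print(cidr.num_addresses)        # 256 addresses in the block
    print(len(list(cidr.hosts())))   # 254 usable host addresses for pod IPs
    # Comfortably above the kubelet's default limit of 110 pods per node.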
Feb 13 15:18:30.182608 kubelet[3405]: I0213 15:18:30.182530 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-cgroup\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.182608 kubelet[3405]: I0213 15:18:30.182571 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-kernel\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.182608 kubelet[3405]: I0213 15:18:30.182597 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkw8v\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-kube-api-access-gkw8v\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.182608 kubelet[3405]: I0213 15:18:30.182616 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94a4df6d-5281-4ccd-962b-6d5840557a94-clustermesh-secrets\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.182608 kubelet[3405]: I0213 15:18:30.182639 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-xtables-lock\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183171 kubelet[3405]: I0213 15:18:30.182654 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1023356b-5936-46b5-be8c-ace328b02673-kube-proxy\") pod \"kube-proxy-fhl89\" (UID: \"1023356b-5936-46b5-be8c-ace328b02673\") " pod="kube-system/kube-proxy-fhl89"
Feb 13 15:18:30.183171 kubelet[3405]: I0213 15:18:30.182671 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1023356b-5936-46b5-be8c-ace328b02673-xtables-lock\") pod \"kube-proxy-fhl89\" (UID: \"1023356b-5936-46b5-be8c-ace328b02673\") " pod="kube-system/kube-proxy-fhl89"
Feb 13 15:18:30.183171 kubelet[3405]: I0213 15:18:30.182686 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-run\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183171 kubelet[3405]: I0213 15:18:30.182703 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-hubble-tls\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183171 kubelet[3405]: I0213 15:18:30.182717 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-config-path\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182732 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jt2b\" (UniqueName: \"kubernetes.io/projected/1023356b-5936-46b5-be8c-ace328b02673-kube-api-access-9jt2b\") pod \"kube-proxy-fhl89\" (UID: \"1023356b-5936-46b5-be8c-ace328b02673\") " pod="kube-system/kube-proxy-fhl89"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182746 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cni-path\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182760 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-etc-cni-netd\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182773 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-lib-modules\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182788 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-bpf-maps\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183308 kubelet[3405]: I0213 15:18:30.182808 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-hostproc\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.183431 kubelet[3405]: I0213 15:18:30.182821 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1023356b-5936-46b5-be8c-ace328b02673-lib-modules\") pod \"kube-proxy-fhl89\" (UID: \"1023356b-5936-46b5-be8c-ace328b02673\") " pod="kube-system/kube-proxy-fhl89"
Feb 13 15:18:30.183431 kubelet[3405]: I0213 15:18:30.182837 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-net\") pod \"cilium-wld8b\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " pod="kube-system/cilium-wld8b"
Feb 13 15:18:30.474834 containerd[1794]: time="2025-02-13T15:18:30.474721425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wld8b,Uid:94a4df6d-5281-4ccd-962b-6d5840557a94,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:30.520076 containerd[1794]: time="2025-02-13T15:18:30.519931822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:30.520076 containerd[1794]: time="2025-02-13T15:18:30.519987702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:30.520076 containerd[1794]: time="2025-02-13T15:18:30.519998502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:30.520520 containerd[1794]: time="2025-02-13T15:18:30.520125142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:30.535266 systemd[1]: Started cri-containerd-2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6.scope - libcontainer container 2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6.
Feb 13 15:18:30.558386 containerd[1794]: time="2025-02-13T15:18:30.558075604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wld8b,Uid:94a4df6d-5281-4ccd-962b-6d5840557a94,Namespace:kube-system,Attempt:0,} returns sandbox id \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\""
Feb 13 15:18:30.561983 containerd[1794]: time="2025-02-13T15:18:30.561804391Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:18:30.609770 systemd[1]: Created slice kubepods-besteffort-pode4a415a5_a9c1_41a7_941c_2ac929d78ff7.slice - libcontainer container kubepods-besteffort-pode4a415a5_a9c1_41a7_941c_2ac929d78ff7.slice.
Feb 13 15:18:30.685290 kubelet[3405]: I0213 15:18:30.685171 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-cilium-config-path\") pod \"cilium-operator-5d85765b45-g592f\" (UID: \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\") " pod="kube-system/cilium-operator-5d85765b45-g592f"
Feb 13 15:18:30.685290 kubelet[3405]: I0213 15:18:30.685227 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwkt\" (UniqueName: \"kubernetes.io/projected/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-kube-api-access-kpwkt\") pod \"cilium-operator-5d85765b45-g592f\" (UID: \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\") " pod="kube-system/cilium-operator-5d85765b45-g592f"
Feb 13 15:18:30.914288 containerd[1794]: time="2025-02-13T15:18:30.914242338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-g592f,Uid:e4a415a5-a9c1-41a7-941c-2ac929d78ff7,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:30.955838 containerd[1794]: time="2025-02-13T15:18:30.955422090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:30.956255 containerd[1794]: time="2025-02-13T15:18:30.956170487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:30.956255 containerd[1794]: time="2025-02-13T15:18:30.956193367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:30.956496 containerd[1794]: time="2025-02-13T15:18:30.956471726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:30.974313 systemd[1]: Started cri-containerd-268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31.scope - libcontainer container 268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31.
Feb 13 15:18:31.003688 containerd[1794]: time="2025-02-13T15:18:31.003629756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-g592f,Uid:e4a415a5-a9c1-41a7-941c-2ac929d78ff7,Namespace:kube-system,Attempt:0,} returns sandbox id \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\""
Feb 13 15:18:31.367998 containerd[1794]: time="2025-02-13T15:18:31.367954160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhl89,Uid:1023356b-5936-46b5-be8c-ace328b02673,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:31.413140 containerd[1794]: time="2025-02-13T15:18:31.412860478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:31.413140 containerd[1794]: time="2025-02-13T15:18:31.412955998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:31.413588 containerd[1794]: time="2025-02-13T15:18:31.413310836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:31.413588 containerd[1794]: time="2025-02-13T15:18:31.413464276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:31.434311 systemd[1]: Started cri-containerd-979dcdc87f67c273eb4c8fae0065d9b6869a3cc0f4f126bb1702774c8b43a8d3.scope - libcontainer container 979dcdc87f67c273eb4c8fae0065d9b6869a3cc0f4f126bb1702774c8b43a8d3.
Feb 13 15:18:31.457481 containerd[1794]: time="2025-02-13T15:18:31.457444437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhl89,Uid:1023356b-5936-46b5-be8c-ace328b02673,Namespace:kube-system,Attempt:0,} returns sandbox id \"979dcdc87f67c273eb4c8fae0065d9b6869a3cc0f4f126bb1702774c8b43a8d3\""
Feb 13 15:18:31.463849 containerd[1794]: time="2025-02-13T15:18:31.463549735Z" level=info msg="CreateContainer within sandbox \"979dcdc87f67c273eb4c8fae0065d9b6869a3cc0f4f126bb1702774c8b43a8d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:18:31.510015 containerd[1794]: time="2025-02-13T15:18:31.509916607Z" level=info msg="CreateContainer within sandbox \"979dcdc87f67c273eb4c8fae0065d9b6869a3cc0f4f126bb1702774c8b43a8d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb11c4a43129c1791e1da09d651b73566abd415b73a14ea0bf650c4cc3b22097\""
Feb 13 15:18:31.510756 containerd[1794]: time="2025-02-13T15:18:31.510724364Z" level=info msg="StartContainer for \"fb11c4a43129c1791e1da09d651b73566abd415b73a14ea0bf650c4cc3b22097\""
Feb 13 15:18:31.537278 systemd[1]: Started cri-containerd-fb11c4a43129c1791e1da09d651b73566abd415b73a14ea0bf650c4cc3b22097.scope - libcontainer container fb11c4a43129c1791e1da09d651b73566abd415b73a14ea0bf650c4cc3b22097.
Feb 13 15:18:31.568222 containerd[1794]: time="2025-02-13T15:18:31.568177477Z" level=info msg="StartContainer for \"fb11c4a43129c1791e1da09d651b73566abd415b73a14ea0bf650c4cc3b22097\" returns successfully"
Feb 13 15:18:36.542633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003818486.mount: Deactivated successfully.
Feb 13 15:18:36.724999 kubelet[3405]: I0213 15:18:36.724638 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhl89" podStartSLOduration=6.724621594 podStartE2EDuration="6.724621594s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:32.43428251 +0000 UTC m=+8.181882076" watchObservedRunningTime="2025-02-13 15:18:36.724621594 +0000 UTC m=+12.472221160"
Feb 13 15:18:38.467775 containerd[1794]: time="2025-02-13T15:18:38.467715185Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:38.472566 containerd[1794]: time="2025-02-13T15:18:38.472523248Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:18:38.477317 containerd[1794]: time="2025-02-13T15:18:38.477262872Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:38.479482 containerd[1794]: time="2025-02-13T15:18:38.479039346Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.917152595s"
Feb 13 15:18:38.479482 containerd[1794]: time="2025-02-13T15:18:38.479071546Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:18:38.481189 containerd[1794]: time="2025-02-13T15:18:38.480725941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:18:38.483238 containerd[1794]: time="2025-02-13T15:18:38.483196012Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:18:38.506953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103991224.mount: Deactivated successfully.
Feb 13 15:18:38.512157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680320925.mount: Deactivated successfully.
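The pod_startup_latency_tracker entry above reports podStartSLOduration=6.724621594 for kube-proxy-fhl89; the zero-valued firstStartedPulling/lastFinishedPulling timestamps suggest no image pull contributed, and the figure lines up exactly with watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic, with the timestamps copied from the entry (truncated to microseconds, which is all datetime carries):

    from datetime import datetime, timezone

    created = datetime(2025, 2, 13, 15, 18, 30, 0, tzinfo=timezone.utc)
    running = datetime(2025, 2, 13, 15, 18, 36, 724621, tzinfo=timezone.utc)
    print((running - created).total_seconds())   # 6.724621, matching podStartSLOduration=6.724621594

The same subtraction applied to the cilium image pull (the PullImage entry at 15:18:30.561804 and the "Pulled ... in 7.917152595s" entry at 15:18:38.479039) agrees with containerd's own figure to within a fraction of a millisecond.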
Feb 13 15:18:38.524304 containerd[1794]: time="2025-02-13T15:18:38.524213952Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\""
Feb 13 15:18:38.525147 containerd[1794]: time="2025-02-13T15:18:38.525048150Z" level=info msg="StartContainer for \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\""
Feb 13 15:18:38.550239 systemd[1]: Started cri-containerd-d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571.scope - libcontainer container d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571.
Feb 13 15:18:38.576127 containerd[1794]: time="2025-02-13T15:18:38.575528417Z" level=info msg="StartContainer for \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\" returns successfully"
Feb 13 15:18:38.578991 systemd[1]: cri-containerd-d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571.scope: Deactivated successfully.
Feb 13 15:18:39.504291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571-rootfs.mount: Deactivated successfully.
Feb 13 15:18:40.118175 containerd[1794]: time="2025-02-13T15:18:40.118113401Z" level=info msg="shim disconnected" id=d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571 namespace=k8s.io
Feb 13 15:18:40.118175 containerd[1794]: time="2025-02-13T15:18:40.118169641Z" level=warning msg="cleaning up after shim disconnected" id=d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571 namespace=k8s.io
Feb 13 15:18:40.118175 containerd[1794]: time="2025-02-13T15:18:40.118177601Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:40.433380 containerd[1794]: time="2025-02-13T15:18:40.433256407Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:18:40.471029 containerd[1794]: time="2025-02-13T15:18:40.470941159Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\""
Feb 13 15:18:40.471730 containerd[1794]: time="2025-02-13T15:18:40.471479477Z" level=info msg="StartContainer for \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\""
Feb 13 15:18:40.498302 systemd[1]: Started cri-containerd-952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4.scope - libcontainer container 952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4.
Feb 13 15:18:40.525812 containerd[1794]: time="2025-02-13T15:18:40.525765292Z" level=info msg="StartContainer for \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\" returns successfully"
Feb 13 15:18:40.532894 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:18:40.533134 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:40.533292 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:40.539353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:40.542884 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:18:40.543338 systemd[1]: cri-containerd-952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4.scope: Deactivated successfully.
Feb 13 15:18:40.566874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4-rootfs.mount: Deactivated successfully.
Feb 13 15:18:40.568501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:40.583943 containerd[1794]: time="2025-02-13T15:18:40.583744375Z" level=info msg="shim disconnected" id=952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4 namespace=k8s.io
Feb 13 15:18:40.583943 containerd[1794]: time="2025-02-13T15:18:40.583793294Z" level=warning msg="cleaning up after shim disconnected" id=952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4 namespace=k8s.io
Feb 13 15:18:40.583943 containerd[1794]: time="2025-02-13T15:18:40.583800974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:41.439117 containerd[1794]: time="2025-02-13T15:18:41.439042779Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:18:41.488131 containerd[1794]: time="2025-02-13T15:18:41.488024529Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\""
Feb 13 15:18:41.488962 containerd[1794]: time="2025-02-13T15:18:41.488716406Z" level=info msg="StartContainer for \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\""
Feb 13 15:18:41.506125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726719146.mount: Deactivated successfully.
Feb 13 15:18:41.519372 systemd[1]: Started cri-containerd-98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494.scope - libcontainer container 98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494.
Feb 13 15:18:41.558942 containerd[1794]: time="2025-02-13T15:18:41.558710603Z" level=info msg="StartContainer for \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\" returns successfully"
Feb 13 15:18:41.561867 systemd[1]: cri-containerd-98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494.scope: Deactivated successfully.
Feb 13 15:18:41.588514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494-rootfs.mount: Deactivated successfully.
Feb 13 15:18:41.630505 containerd[1794]: time="2025-02-13T15:18:41.630441793Z" level=info msg="shim disconnected" id=98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494 namespace=k8s.io
Feb 13 15:18:41.630505 containerd[1794]: time="2025-02-13T15:18:41.630498793Z" level=warning msg="cleaning up after shim disconnected" id=98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494 namespace=k8s.io
Feb 13 15:18:41.630505 containerd[1794]: time="2025-02-13T15:18:41.630508033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:42.445885 containerd[1794]: time="2025-02-13T15:18:42.445828795Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:18:42.491570 containerd[1794]: time="2025-02-13T15:18:42.491434076Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\""
Feb 13 15:18:42.492595 containerd[1794]: time="2025-02-13T15:18:42.492389553Z" level=info msg="StartContainer for \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\""
Feb 13 15:18:42.521243 systemd[1]: Started cri-containerd-ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b.scope - libcontainer container ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b.
Feb 13 15:18:42.540630 systemd[1]: cri-containerd-ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b.scope: Deactivated successfully.
Feb 13 15:18:42.547117 containerd[1794]: time="2025-02-13T15:18:42.546894083Z" level=info msg="StartContainer for \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\" returns successfully"
Feb 13 15:18:42.562840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b-rootfs.mount: Deactivated successfully.
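The repeating pattern above (CreateContainer returns an id, StartContainer, the cri-containerd scope deactivates, the shim disconnects and its rootfs mount is cleaned up) is the lifecycle of a run-to-completion init container: the log walks through cilium's init chain mount-cgroup → apply-sysctl-overwrites → mount-bpf-fs → clean-cilium-state before the long-lived cilium-agent starts below. A small sketch recovering that order from the excerpt, again assuming it is saved as the hypothetical "journal.txt":

    import re

    # Only the "... returns container id" acknowledgements carry a confirmed name.
    CREATED = re.compile(r'for &ContainerMetadata\{Name:([\w-]+),Attempt:0,\} returns container id')

    with open('journal.txt') as fh:
        print(CREATED.findall(fh.read()))
    # Over this excerpt, roughly: ['kube-scheduler', 'kube-proxy', 'mount-cgroup',
    #  'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', ...]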
Feb 13 15:18:42.586356 containerd[1794]: time="2025-02-13T15:18:42.586181626Z" level=info msg="shim disconnected" id=ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b namespace=k8s.io Feb 13 15:18:42.586356 containerd[1794]: time="2025-02-13T15:18:42.586343186Z" level=warning msg="cleaning up after shim disconnected" id=ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b namespace=k8s.io Feb 13 15:18:42.586575 containerd[1794]: time="2025-02-13T15:18:42.586367706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:43.448761 containerd[1794]: time="2025-02-13T15:18:43.448716744Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:18:43.500706 containerd[1794]: time="2025-02-13T15:18:43.500659923Z" level=info msg="CreateContainer within sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\"" Feb 13 15:18:43.501501 containerd[1794]: time="2025-02-13T15:18:43.501460680Z" level=info msg="StartContainer for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\"" Feb 13 15:18:43.532245 systemd[1]: Started cri-containerd-3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540.scope - libcontainer container 3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540. Feb 13 15:18:43.533732 containerd[1794]: time="2025-02-13T15:18:43.532906651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:43.538468 containerd[1794]: time="2025-02-13T15:18:43.538394512Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:18:43.543872 containerd[1794]: time="2025-02-13T15:18:43.543720573Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:43.545842 containerd[1794]: time="2025-02-13T15:18:43.545806686Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.064614707s" Feb 13 15:18:43.545842 containerd[1794]: time="2025-02-13T15:18:43.545841526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:18:43.548782 containerd[1794]: time="2025-02-13T15:18:43.548619756Z" level=info msg="CreateContainer within sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:18:43.571451 containerd[1794]: time="2025-02-13T15:18:43.571332917Z" level=info msg="StartContainer for 
\"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" returns successfully" Feb 13 15:18:43.596882 containerd[1794]: time="2025-02-13T15:18:43.596835468Z" level=info msg="CreateContainer within sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\"" Feb 13 15:18:43.598151 containerd[1794]: time="2025-02-13T15:18:43.598080424Z" level=info msg="StartContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\"" Feb 13 15:18:43.647248 systemd[1]: Started cri-containerd-ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92.scope - libcontainer container ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92. Feb 13 15:18:43.686908 containerd[1794]: time="2025-02-13T15:18:43.686764155Z" level=info msg="StartContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" returns successfully" Feb 13 15:18:43.739673 kubelet[3405]: I0213 15:18:43.739333 3405 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:18:43.789354 systemd[1]: Created slice kubepods-burstable-pod28d07978_2cc0_4c29_8c2a_d97efb083ca5.slice - libcontainer container kubepods-burstable-pod28d07978_2cc0_4c29_8c2a_d97efb083ca5.slice. Feb 13 15:18:43.800437 systemd[1]: Created slice kubepods-burstable-pode19d761c_6738_4662_bbb1_4b029453119f.slice - libcontainer container kubepods-burstable-pode19d761c_6738_4662_bbb1_4b029453119f.slice. Feb 13 15:18:43.971588 kubelet[3405]: I0213 15:18:43.971432 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e19d761c-6738-4662-bbb1-4b029453119f-config-volume\") pod \"coredns-6f6b679f8f-zgcp5\" (UID: \"e19d761c-6738-4662-bbb1-4b029453119f\") " pod="kube-system/coredns-6f6b679f8f-zgcp5" Feb 13 15:18:43.971588 kubelet[3405]: I0213 15:18:43.971491 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d07978-2cc0-4c29-8c2a-d97efb083ca5-config-volume\") pod \"coredns-6f6b679f8f-t6csz\" (UID: \"28d07978-2cc0-4c29-8c2a-d97efb083ca5\") " pod="kube-system/coredns-6f6b679f8f-t6csz" Feb 13 15:18:43.971588 kubelet[3405]: I0213 15:18:43.971513 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcshd\" (UniqueName: \"kubernetes.io/projected/e19d761c-6738-4662-bbb1-4b029453119f-kube-api-access-dcshd\") pod \"coredns-6f6b679f8f-zgcp5\" (UID: \"e19d761c-6738-4662-bbb1-4b029453119f\") " pod="kube-system/coredns-6f6b679f8f-zgcp5" Feb 13 15:18:43.971588 kubelet[3405]: I0213 15:18:43.971531 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hg6t\" (UniqueName: \"kubernetes.io/projected/28d07978-2cc0-4c29-8c2a-d97efb083ca5-kube-api-access-2hg6t\") pod \"coredns-6f6b679f8f-t6csz\" (UID: \"28d07978-2cc0-4c29-8c2a-d97efb083ca5\") " pod="kube-system/coredns-6f6b679f8f-t6csz" Feb 13 15:18:44.096676 containerd[1794]: time="2025-02-13T15:18:44.096342250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t6csz,Uid:28d07978-2cc0-4c29-8c2a-d97efb083ca5,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:44.105122 containerd[1794]: time="2025-02-13T15:18:44.104645221Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgcp5,Uid:e19d761c-6738-4662-bbb1-4b029453119f,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:44.580365 kubelet[3405]: I0213 15:18:44.580305 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wld8b" podStartSLOduration=6.661300336 podStartE2EDuration="14.580288805s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="2025-02-13 15:18:30.561040554 +0000 UTC m=+6.308640120" lastFinishedPulling="2025-02-13 15:18:38.480029063 +0000 UTC m=+14.227628589" observedRunningTime="2025-02-13 15:18:44.57874785 +0000 UTC m=+20.326347416" watchObservedRunningTime="2025-02-13 15:18:44.580288805 +0000 UTC m=+20.327888331" Feb 13 15:18:44.580534 kubelet[3405]: I0213 15:18:44.580466 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-g592f" podStartSLOduration=2.038983752 podStartE2EDuration="14.580461445s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="2025-02-13 15:18:31.004968751 +0000 UTC m=+6.752568317" lastFinishedPulling="2025-02-13 15:18:43.546446484 +0000 UTC m=+19.294046010" observedRunningTime="2025-02-13 15:18:44.525176357 +0000 UTC m=+20.272775923" watchObservedRunningTime="2025-02-13 15:18:44.580461445 +0000 UTC m=+20.328061011" Feb 13 15:18:46.691637 systemd-networkd[1352]: cilium_host: Link UP Feb 13 15:18:46.693490 systemd-networkd[1352]: cilium_net: Link UP Feb 13 15:18:46.696409 systemd-networkd[1352]: cilium_net: Gained carrier Feb 13 15:18:46.696596 systemd-networkd[1352]: cilium_host: Gained carrier Feb 13 15:18:46.696684 systemd-networkd[1352]: cilium_net: Gained IPv6LL Feb 13 15:18:46.696781 systemd-networkd[1352]: cilium_host: Gained IPv6LL Feb 13 15:18:46.825806 systemd-networkd[1352]: cilium_vxlan: Link UP Feb 13 15:18:46.826179 systemd-networkd[1352]: cilium_vxlan: Gained carrier Feb 13 15:18:47.103120 kernel: NET: Registered PF_ALG protocol family Feb 13 15:18:47.753107 systemd-networkd[1352]: lxc_health: Link UP Feb 13 15:18:47.762669 systemd-networkd[1352]: lxc_health: Gained carrier Feb 13 15:18:48.178125 kernel: eth0: renamed from tmp577f9 Feb 13 15:18:48.182322 systemd-networkd[1352]: lxceae265f7e2f0: Link UP Feb 13 15:18:48.187372 systemd-networkd[1352]: lxceae265f7e2f0: Gained carrier Feb 13 15:18:48.216943 systemd-networkd[1352]: lxcf2d1dafb4000: Link UP Feb 13 15:18:48.231161 kernel: eth0: renamed from tmp50995 Feb 13 15:18:48.235706 systemd-networkd[1352]: lxcf2d1dafb4000: Gained carrier Feb 13 15:18:48.853287 systemd-networkd[1352]: cilium_vxlan: Gained IPv6LL Feb 13 15:18:49.237244 systemd-networkd[1352]: lxceae265f7e2f0: Gained IPv6LL Feb 13 15:18:49.685286 systemd-networkd[1352]: lxcf2d1dafb4000: Gained IPv6LL Feb 13 15:18:49.685586 systemd-networkd[1352]: lxc_health: Gained IPv6LL Feb 13 15:18:51.832285 containerd[1794]: time="2025-02-13T15:18:51.828988351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:51.832285 containerd[1794]: time="2025-02-13T15:18:51.829042551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:51.832285 containerd[1794]: time="2025-02-13T15:18:51.829058071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.832285 containerd[1794]: time="2025-02-13T15:18:51.829925988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.857082 containerd[1794]: time="2025-02-13T15:18:51.856063748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:51.857082 containerd[1794]: time="2025-02-13T15:18:51.856149708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:51.857082 containerd[1794]: time="2025-02-13T15:18:51.856165228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.857082 containerd[1794]: time="2025-02-13T15:18:51.856240907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.858255 systemd[1]: Started cri-containerd-509956081fda99156e418f7c443966857ecd7e46459d14f6f682e3e6b660f592.scope - libcontainer container 509956081fda99156e418f7c443966857ecd7e46459d14f6f682e3e6b660f592. Feb 13 15:18:51.892289 systemd[1]: Started cri-containerd-577f9986db1f46d060892a5c5b5a4994e90953d58902cb26ba370157e2024551.scope - libcontainer container 577f9986db1f46d060892a5c5b5a4994e90953d58902cb26ba370157e2024551. Feb 13 15:18:51.907462 containerd[1794]: time="2025-02-13T15:18:51.907271391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgcp5,Uid:e19d761c-6738-4662-bbb1-4b029453119f,Namespace:kube-system,Attempt:0,} returns sandbox id \"509956081fda99156e418f7c443966857ecd7e46459d14f6f682e3e6b660f592\"" Feb 13 15:18:51.911160 containerd[1794]: time="2025-02-13T15:18:51.911045459Z" level=info msg="CreateContainer within sandbox \"509956081fda99156e418f7c443966857ecd7e46459d14f6f682e3e6b660f592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:51.939111 containerd[1794]: time="2025-02-13T15:18:51.939015493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t6csz,Uid:28d07978-2cc0-4c29-8c2a-d97efb083ca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"577f9986db1f46d060892a5c5b5a4994e90953d58902cb26ba370157e2024551\"" Feb 13 15:18:51.945002 containerd[1794]: time="2025-02-13T15:18:51.944811355Z" level=info msg="CreateContainer within sandbox \"577f9986db1f46d060892a5c5b5a4994e90953d58902cb26ba370157e2024551\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:51.958580 containerd[1794]: time="2025-02-13T15:18:51.958489833Z" level=info msg="CreateContainer within sandbox \"509956081fda99156e418f7c443966857ecd7e46459d14f6f682e3e6b660f592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a4b977e9030abffb3ce13b76b5d99768a3b6cb55cbab20573ed1911911db276\"" Feb 13 15:18:51.959195 containerd[1794]: time="2025-02-13T15:18:51.959150231Z" level=info msg="StartContainer for \"2a4b977e9030abffb3ce13b76b5d99768a3b6cb55cbab20573ed1911911db276\"" Feb 13 15:18:51.992247 systemd[1]: Started cri-containerd-2a4b977e9030abffb3ce13b76b5d99768a3b6cb55cbab20573ed1911911db276.scope - libcontainer container 2a4b977e9030abffb3ce13b76b5d99768a3b6cb55cbab20573ed1911911db276. 
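[annotation] The containerd entries above record the standard CRI startup sequence for each coredns pod: RunPodSandbox returns a sandbox id, CreateContainer registers the coredns container inside it, and StartContainer launches it; each cri-containerd-*.scope unit is the resulting libcontainer scope. The earlier kernel entry "eth0: renamed from tmp577f9" also appears to be the pod-side interface of sandbox 577f9986… being renamed to eth0 inside its new network namespace (the temporary name carries the sandbox id's leading characters). Below is a minimal sketch of that call sequence, assuming the k8s.io/cri-api Go bindings and containerd's default CRI socket; the coredns image tag is an illustrative assumption, not a value taken from this log.

```go
// Minimal sketch, not containerd's implementation: the RunPodSandbox ->
// CreateContainer -> StartContainer sequence the entries above record,
// issued against containerd's CRI endpoint. Socket path and image tag
// are assumptions for illustration.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Pod identity copied from the RunPodSandbox entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-6f6b679f8f-t6csz",
			Uid:       "28d07978-2cc0-4c29-8c2a-d97efb083ca5",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			// Assumed image reference; the log does not name the tag.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Corresponds to the "StartContainer for ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```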
Feb 13 15:18:51.994519 containerd[1794]: time="2025-02-13T15:18:51.994478163Z" level=info msg="CreateContainer within sandbox \"577f9986db1f46d060892a5c5b5a4994e90953d58902cb26ba370157e2024551\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85ca826b10a004434faeb67e02758c17e68902d75a85af0f8056334af3bedd83\"" Feb 13 15:18:51.997281 containerd[1794]: time="2025-02-13T15:18:51.997227794Z" level=info msg="StartContainer for \"85ca826b10a004434faeb67e02758c17e68902d75a85af0f8056334af3bedd83\"" Feb 13 15:18:52.023264 systemd[1]: Started cri-containerd-85ca826b10a004434faeb67e02758c17e68902d75a85af0f8056334af3bedd83.scope - libcontainer container 85ca826b10a004434faeb67e02758c17e68902d75a85af0f8056334af3bedd83. Feb 13 15:18:52.032736 containerd[1794]: time="2025-02-13T15:18:52.032595566Z" level=info msg="StartContainer for \"2a4b977e9030abffb3ce13b76b5d99768a3b6cb55cbab20573ed1911911db276\" returns successfully" Feb 13 15:18:52.056635 containerd[1794]: time="2025-02-13T15:18:52.056572892Z" level=info msg="StartContainer for \"85ca826b10a004434faeb67e02758c17e68902d75a85af0f8056334af3bedd83\" returns successfully" Feb 13 15:18:52.489728 kubelet[3405]: I0213 15:18:52.489664 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zgcp5" podStartSLOduration=22.489650321 podStartE2EDuration="22.489650321s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:52.487887647 +0000 UTC m=+28.235487213" watchObservedRunningTime="2025-02-13 15:18:52.489650321 +0000 UTC m=+28.237249887" Feb 13 15:18:52.506277 kubelet[3405]: I0213 15:18:52.505959 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t6csz" podStartSLOduration=22.505941391 podStartE2EDuration="22.505941391s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:52.504968314 +0000 UTC m=+28.252567880" watchObservedRunningTime="2025-02-13 15:18:52.505941391 +0000 UTC m=+28.253540917" Feb 13 15:21:24.423291 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:46542.service - OpenSSH per-connection server daemon (10.200.16.10:46542). Feb 13 15:21:24.916939 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 46542 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:24.918859 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:24.924292 systemd-logind[1720]: New session 10 of user core. Feb 13 15:21:24.928261 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:21:25.352408 sshd[4808]: Connection closed by 10.200.16.10 port 46542 Feb 13 15:21:25.352974 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:25.356344 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:46542.service: Deactivated successfully. Feb 13 15:21:25.358278 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:21:25.360229 systemd-logind[1720]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:21:25.361857 systemd-logind[1720]: Removed session 10. Feb 13 15:21:30.436206 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:51932.service - OpenSSH per-connection server daemon (10.200.16.10:51932). 
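[annotation] The pod_startup_latency_tracker entries above report two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). The cilium-wld8b entry at 15:18:44 checks out exactly: 14.580288805 − (14.227628589 − 6.308640120) = 6.661300336. For the two coredns pods the pull timestamps are the zero time, so both durations coincide. A small reproduction of that arithmetic:

```go
// Reproduces the duration arithmetic in the pod_startup_latency_tracker
// entries, using the monotonic m=+ offsets printed with each timestamp.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 6.308640120  // m=+6.308640120
		lastFinishedPulling = 14.227628589 // m=+14.227628589
		podStartE2E         = 14.580288805 // observedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window:   %.9fs\n", pull)
	fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pull) // 6.661300336s, as logged
}
```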
Feb 13 15:21:30.909021 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 51932 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:30.910453 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:30.916896 systemd-logind[1720]: New session 11 of user core. Feb 13 15:21:30.920262 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:21:31.302781 sshd[4827]: Connection closed by 10.200.16.10 port 51932 Feb 13 15:21:31.303410 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:31.307975 systemd-logind[1720]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:21:31.309011 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:51932.service: Deactivated successfully. Feb 13 15:21:31.312511 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:21:31.314568 systemd-logind[1720]: Removed session 11. Feb 13 15:21:36.388168 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:51938.service - OpenSSH per-connection server daemon (10.200.16.10:51938). Feb 13 15:21:36.848506 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 51938 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:36.849782 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:36.854939 systemd-logind[1720]: New session 12 of user core. Feb 13 15:21:36.859249 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:21:37.238283 sshd[4843]: Connection closed by 10.200.16.10 port 51938 Feb 13 15:21:37.237712 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:37.241135 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:51938.service: Deactivated successfully. Feb 13 15:21:37.243336 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:21:37.245128 systemd-logind[1720]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:21:37.246072 systemd-logind[1720]: Removed session 12. Feb 13 15:21:42.332384 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:52676.service - OpenSSH per-connection server daemon (10.200.16.10:52676). Feb 13 15:21:42.823640 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 52676 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:42.824948 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:42.829043 systemd-logind[1720]: New session 13 of user core. Feb 13 15:21:42.839257 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:21:43.235141 sshd[4862]: Connection closed by 10.200.16.10 port 52676 Feb 13 15:21:43.234737 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:43.239041 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:52676.service: Deactivated successfully. Feb 13 15:21:43.241966 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:21:43.243199 systemd-logind[1720]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:21:43.244473 systemd-logind[1720]: Removed session 13. Feb 13 15:21:43.316374 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:52690.service - OpenSSH per-connection server daemon (10.200.16.10:52690). 
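[annotation] Each "Started sshd@N-<local>:22-<peer>:<port>.service - OpenSSH per-connection server daemon" entry in this run is systemd socket activation with Accept=yes: sshd.socket accepts the TCP connection and spawns a dedicated sshd@.service instance that inherits the already-connected socket as its first passed file descriptor, while pam_unix and systemd-logind open the matching session-N.scope. A hedged sketch of that mechanism (a toy per-connection service, not sshd itself), assuming the go-systemd activation helpers:

```go
// Toy per-connection daemon: under a .socket unit with Accept=yes,
// systemd passes the already-accepted connection as the first inherited
// file descriptor (fd 3). This is the pattern behind the sshd@N-... units.
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	files := activation.Files(true) // unset LISTEN_* env vars after reading them
	if len(files) != 1 {
		log.Fatalf("expected 1 inherited fd, got %d", len(files))
	}
	conn, err := net.FileConn(files[0])
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Fprintf(conn, "hello %s, one process per connection\n", conn.RemoteAddr())
}
```

One process per connection is why every SSH login in this log appears as its own sshd@… unit and session-N.scope, each torn down independently when the connection closes.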
Feb 13 15:21:43.751215 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 52690 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:43.752559 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:43.756886 systemd-logind[1720]: New session 14 of user core. Feb 13 15:21:43.761279 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:21:44.175155 sshd[4876]: Connection closed by 10.200.16.10 port 52690 Feb 13 15:21:44.175741 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:44.179042 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:52690.service: Deactivated successfully. Feb 13 15:21:44.181834 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:21:44.184554 systemd-logind[1720]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:21:44.186002 systemd-logind[1720]: Removed session 14. Feb 13 15:21:44.270213 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:52704.service - OpenSSH per-connection server daemon (10.200.16.10:52704). Feb 13 15:21:44.719720 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 52704 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:44.721085 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:44.725350 systemd-logind[1720]: New session 15 of user core. Feb 13 15:21:44.731306 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:21:45.111426 sshd[4888]: Connection closed by 10.200.16.10 port 52704 Feb 13 15:21:45.112111 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:45.115859 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:52704.service: Deactivated successfully. Feb 13 15:21:45.118695 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:21:45.119966 systemd-logind[1720]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:21:45.122125 systemd-logind[1720]: Removed session 15. Feb 13 15:21:50.196348 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:48840.service - OpenSSH per-connection server daemon (10.200.16.10:48840). Feb 13 15:21:50.656455 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 48840 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:50.657747 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:50.663276 systemd-logind[1720]: New session 16 of user core. Feb 13 15:21:50.666266 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:21:51.068135 sshd[4901]: Connection closed by 10.200.16.10 port 48840 Feb 13 15:21:51.068697 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:51.072130 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:48840.service: Deactivated successfully. Feb 13 15:21:51.074003 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:21:51.075767 systemd-logind[1720]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:21:51.076835 systemd-logind[1720]: Removed session 16. Feb 13 15:21:56.167421 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:48852.service - OpenSSH per-connection server daemon (10.200.16.10:48852). 
Feb 13 15:21:56.659007 sshd[4914]: Accepted publickey for core from 10.200.16.10 port 48852 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:56.660377 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:56.664803 systemd-logind[1720]: New session 17 of user core. Feb 13 15:21:56.669298 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:21:57.066018 sshd[4916]: Connection closed by 10.200.16.10 port 48852 Feb 13 15:21:57.066682 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:57.070977 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:48852.service: Deactivated successfully. Feb 13 15:21:57.073332 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:21:57.074481 systemd-logind[1720]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:21:57.075608 systemd-logind[1720]: Removed session 17. Feb 13 15:21:57.159575 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:48856.service - OpenSSH per-connection server daemon (10.200.16.10:48856). Feb 13 15:21:57.650827 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 48856 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:57.652273 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:57.656456 systemd-logind[1720]: New session 18 of user core. Feb 13 15:21:57.660282 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:21:58.086116 sshd[4930]: Connection closed by 10.200.16.10 port 48856 Feb 13 15:21:58.086764 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:58.090421 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:48856.service: Deactivated successfully. Feb 13 15:21:58.092664 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:21:58.094276 systemd-logind[1720]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:21:58.095816 systemd-logind[1720]: Removed session 18. Feb 13 15:21:58.167380 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:48866.service - OpenSSH per-connection server daemon (10.200.16.10:48866). Feb 13 15:21:58.623513 sshd[4940]: Accepted publickey for core from 10.200.16.10 port 48866 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:21:58.624831 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:58.629896 systemd-logind[1720]: New session 19 of user core. Feb 13 15:21:58.635284 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:22:00.215794 sshd[4942]: Connection closed by 10.200.16.10 port 48866 Feb 13 15:22:00.219308 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:00.223310 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:48866.service: Deactivated successfully. Feb 13 15:22:00.225679 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:22:00.226549 systemd-logind[1720]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:22:00.227606 systemd-logind[1720]: Removed session 19. Feb 13 15:22:00.296413 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:39118.service - OpenSSH per-connection server daemon (10.200.16.10:39118). 
Feb 13 15:22:00.714527 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 39118 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:00.716351 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:00.720700 systemd-logind[1720]: New session 20 of user core. Feb 13 15:22:00.728261 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:22:01.189450 sshd[4962]: Connection closed by 10.200.16.10 port 39118 Feb 13 15:22:01.189884 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:01.193185 systemd-logind[1720]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:22:01.194031 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:39118.service: Deactivated successfully. Feb 13 15:22:01.195821 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:22:01.198241 systemd-logind[1720]: Removed session 20. Feb 13 15:22:01.282006 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:39130.service - OpenSSH per-connection server daemon (10.200.16.10:39130). Feb 13 15:22:01.770499 sshd[4971]: Accepted publickey for core from 10.200.16.10 port 39130 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:01.772277 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:01.776927 systemd-logind[1720]: New session 21 of user core. Feb 13 15:22:01.792331 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:22:02.174176 sshd[4975]: Connection closed by 10.200.16.10 port 39130 Feb 13 15:22:02.175285 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:02.178865 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:39130.service: Deactivated successfully. Feb 13 15:22:02.180595 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:22:02.181618 systemd-logind[1720]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:22:02.182758 systemd-logind[1720]: Removed session 21. Feb 13 15:22:07.255397 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:39138.service - OpenSSH per-connection server daemon (10.200.16.10:39138). Feb 13 15:22:07.687713 sshd[4990]: Accepted publickey for core from 10.200.16.10 port 39138 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:07.689117 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:07.694500 systemd-logind[1720]: New session 22 of user core. Feb 13 15:22:07.701374 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:22:08.061554 sshd[4992]: Connection closed by 10.200.16.10 port 39138 Feb 13 15:22:08.062155 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:08.066339 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:39138.service: Deactivated successfully. Feb 13 15:22:08.069411 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:22:08.070574 systemd-logind[1720]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:22:08.072271 systemd-logind[1720]: Removed session 22. Feb 13 15:22:13.151374 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:38006.service - OpenSSH per-connection server daemon (10.200.16.10:38006). 
Feb 13 15:22:13.599007 sshd[5003]: Accepted publickey for core from 10.200.16.10 port 38006 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:13.600387 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:13.605459 systemd-logind[1720]: New session 23 of user core. Feb 13 15:22:13.608269 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:22:13.979205 sshd[5005]: Connection closed by 10.200.16.10 port 38006 Feb 13 15:22:13.979734 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:13.982692 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:38006.service: Deactivated successfully. Feb 13 15:22:13.984484 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:22:13.986520 systemd-logind[1720]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:22:13.987669 systemd-logind[1720]: Removed session 23. Feb 13 15:22:19.053372 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:46534.service - OpenSSH per-connection server daemon (10.200.16.10:46534). Feb 13 15:22:19.468099 sshd[5017]: Accepted publickey for core from 10.200.16.10 port 46534 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:19.469466 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:19.474553 systemd-logind[1720]: New session 24 of user core. Feb 13 15:22:19.484632 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:22:19.829400 sshd[5019]: Connection closed by 10.200.16.10 port 46534 Feb 13 15:22:19.830338 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:19.833944 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:46534.service: Deactivated successfully. Feb 13 15:22:19.836032 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:22:19.837737 systemd-logind[1720]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:22:19.838750 systemd-logind[1720]: Removed session 24. Feb 13 15:22:19.909444 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:46550.service - OpenSSH per-connection server daemon (10.200.16.10:46550). Feb 13 15:22:20.322919 sshd[5030]: Accepted publickey for core from 10.200.16.10 port 46550 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104 Feb 13 15:22:20.324511 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:20.329396 systemd-logind[1720]: New session 25 of user core. Feb 13 15:22:20.333255 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:22:22.310812 containerd[1794]: time="2025-02-13T15:22:22.310756865Z" level=info msg="StopContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" with timeout 30 (s)" Feb 13 15:22:22.314527 containerd[1794]: time="2025-02-13T15:22:22.314367818Z" level=info msg="Stop container \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" with signal terminated" Feb 13 15:22:22.336832 systemd[1]: cri-containerd-ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92.scope: Deactivated successfully. 
Feb 13 15:22:22.341828 containerd[1794]: time="2025-02-13T15:22:22.341533522Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:22:22.351971 containerd[1794]: time="2025-02-13T15:22:22.351933060Z" level=info msg="StopContainer for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" with timeout 2 (s)" Feb 13 15:22:22.352693 containerd[1794]: time="2025-02-13T15:22:22.352583179Z" level=info msg="Stop container \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" with signal terminated" Feb 13 15:22:22.361029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92-rootfs.mount: Deactivated successfully. Feb 13 15:22:22.363318 systemd-networkd[1352]: lxc_health: Link DOWN Feb 13 15:22:22.363325 systemd-networkd[1352]: lxc_health: Lost carrier Feb 13 15:22:22.383615 systemd[1]: cri-containerd-3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540.scope: Deactivated successfully. Feb 13 15:22:22.383971 systemd[1]: cri-containerd-3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540.scope: Consumed 6.442s CPU time, 125.6M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 15:22:22.404578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540-rootfs.mount: Deactivated successfully. Feb 13 15:22:22.450752 containerd[1794]: time="2025-02-13T15:22:22.450529576Z" level=info msg="shim disconnected" id=3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540 namespace=k8s.io Feb 13 15:22:22.450752 containerd[1794]: time="2025-02-13T15:22:22.450593536Z" level=warning msg="cleaning up after shim disconnected" id=3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540 namespace=k8s.io Feb 13 15:22:22.450752 containerd[1794]: time="2025-02-13T15:22:22.450601216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:22.451183 containerd[1794]: time="2025-02-13T15:22:22.450799736Z" level=info msg="shim disconnected" id=ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92 namespace=k8s.io Feb 13 15:22:22.451183 containerd[1794]: time="2025-02-13T15:22:22.450915535Z" level=warning msg="cleaning up after shim disconnected" id=ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92 namespace=k8s.io Feb 13 15:22:22.451183 containerd[1794]: time="2025-02-13T15:22:22.450937975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:22.468575 containerd[1794]: time="2025-02-13T15:22:22.468463859Z" level=info msg="StopContainer for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" returns successfully" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469143338Z" level=info msg="StopPodSandbox for \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\"" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469188697Z" level=info msg="Container to stop \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469199817Z" level=info msg="Container to stop 
\"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469207937Z" level=info msg="Container to stop \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469216657Z" level=info msg="Container to stop \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.470442 containerd[1794]: time="2025-02-13T15:22:22.469224497Z" level=info msg="Container to stop \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.472271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6-shm.mount: Deactivated successfully. Feb 13 15:22:22.473902 containerd[1794]: time="2025-02-13T15:22:22.473695168Z" level=info msg="StopContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" returns successfully" Feb 13 15:22:22.474434 containerd[1794]: time="2025-02-13T15:22:22.474311647Z" level=info msg="StopPodSandbox for \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\"" Feb 13 15:22:22.474434 containerd[1794]: time="2025-02-13T15:22:22.474352087Z" level=info msg="Container to stop \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:22.477462 systemd[1]: cri-containerd-2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6.scope: Deactivated successfully. Feb 13 15:22:22.493969 systemd[1]: cri-containerd-268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31.scope: Deactivated successfully. 
Feb 13 15:22:22.533858 containerd[1794]: time="2025-02-13T15:22:22.533516244Z" level=info msg="shim disconnected" id=268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31 namespace=k8s.io Feb 13 15:22:22.533858 containerd[1794]: time="2025-02-13T15:22:22.533746124Z" level=warning msg="cleaning up after shim disconnected" id=268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31 namespace=k8s.io Feb 13 15:22:22.534057 containerd[1794]: time="2025-02-13T15:22:22.533987323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:22.534108 containerd[1794]: time="2025-02-13T15:22:22.534064603Z" level=info msg="shim disconnected" id=2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6 namespace=k8s.io Feb 13 15:22:22.536191 containerd[1794]: time="2025-02-13T15:22:22.534429523Z" level=warning msg="cleaning up after shim disconnected" id=2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6 namespace=k8s.io Feb 13 15:22:22.536191 containerd[1794]: time="2025-02-13T15:22:22.534448562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:22.548706 containerd[1794]: time="2025-02-13T15:22:22.548659333Z" level=info msg="TearDown network for sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" successfully" Feb 13 15:22:22.548706 containerd[1794]: time="2025-02-13T15:22:22.548696813Z" level=info msg="StopPodSandbox for \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" returns successfully" Feb 13 15:22:22.552694 containerd[1794]: time="2025-02-13T15:22:22.552593565Z" level=info msg="TearDown network for sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" successfully" Feb 13 15:22:22.552694 containerd[1794]: time="2025-02-13T15:22:22.552623085Z" level=info msg="StopPodSandbox for \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" returns successfully" Feb 13 15:22:22.580701 kubelet[3405]: I0213 15:22:22.579941 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94a4df6d-5281-4ccd-962b-6d5840557a94-clustermesh-secrets\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581230 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-run\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581262 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkw8v\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-kube-api-access-gkw8v\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581290 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-net\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581306 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-bpf-maps\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581324 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-cilium-config-path\") pod \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\" (UID: \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\") " Feb 13 15:22:22.581723 kubelet[3405]: I0213 15:22:22.581341 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-config-path\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581358 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-hubble-tls\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581371 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-hostproc\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581387 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-kernel\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581405 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cni-path\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581419 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-xtables-lock\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.581882 kubelet[3405]: I0213 15:22:22.581436 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpwkt\" (UniqueName: \"kubernetes.io/projected/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-kube-api-access-kpwkt\") pod \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\" (UID: \"e4a415a5-a9c1-41a7-941c-2ac929d78ff7\") " Feb 13 15:22:22.582003 kubelet[3405]: I0213 15:22:22.581451 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-cgroup\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.582003 kubelet[3405]: I0213 15:22:22.581464 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-lib-modules\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.582003 kubelet[3405]: I0213 15:22:22.581480 3405 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-etc-cni-netd\") pod \"94a4df6d-5281-4ccd-962b-6d5840557a94\" (UID: \"94a4df6d-5281-4ccd-962b-6d5840557a94\") " Feb 13 15:22:22.582003 kubelet[3405]: I0213 15:22:22.581531 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.582003 kubelet[3405]: I0213 15:22:22.581567 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.583462 kubelet[3405]: I0213 15:22:22.583427 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.583579 kubelet[3405]: I0213 15:22:22.583565 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.584407 kubelet[3405]: I0213 15:22:22.584384 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a4df6d-5281-4ccd-962b-6d5840557a94-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:22:22.584603 kubelet[3405]: I0213 15:22:22.584585 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-kube-api-access-gkw8v" (OuterVolumeSpecName: "kube-api-access-gkw8v") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "kube-api-access-gkw8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:22.584708 kubelet[3405]: I0213 15:22:22.584695 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cni-path" (OuterVolumeSpecName: "cni-path") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.585418 kubelet[3405]: I0213 15:22:22.585387 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4a415a5-a9c1-41a7-941c-2ac929d78ff7" (UID: "e4a415a5-a9c1-41a7-941c-2ac929d78ff7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:22:22.585739 kubelet[3405]: I0213 15:22:22.585708 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.587813 kubelet[3405]: I0213 15:22:22.587775 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:22:22.587937 kubelet[3405]: I0213 15:22:22.587906 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.587984 kubelet[3405]: I0213 15:22:22.587941 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.588046 kubelet[3405]: I0213 15:22:22.588020 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-kube-api-access-kpwkt" (OuterVolumeSpecName: "kube-api-access-kpwkt") pod "e4a415a5-a9c1-41a7-941c-2ac929d78ff7" (UID: "e4a415a5-a9c1-41a7-941c-2ac929d78ff7"). InnerVolumeSpecName "kube-api-access-kpwkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:22.588641 kubelet[3405]: I0213 15:22:22.588398 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-hostproc" (OuterVolumeSpecName: "hostproc") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.588641 kubelet[3405]: I0213 15:22:22.588432 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:22.592592 kubelet[3405]: I0213 15:22:22.592541 3405 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94a4df6d-5281-4ccd-962b-6d5840557a94" (UID: "94a4df6d-5281-4ccd-962b-6d5840557a94"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681703 3405 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-xtables-lock\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681750 3405 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kpwkt\" (UniqueName: \"kubernetes.io/projected/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-kube-api-access-kpwkt\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681764 3405 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-cgroup\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681774 3405 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-lib-modules\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681786 3405 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-etc-cni-netd\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681794 3405 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94a4df6d-5281-4ccd-962b-6d5840557a94-clustermesh-secrets\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681802 3405 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-run\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.681904 kubelet[3405]: I0213 15:22:22.681811 3405 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4a415a5-a9c1-41a7-941c-2ac929d78ff7-cilium-config-path\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681819 3405 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gkw8v\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-kube-api-access-gkw8v\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681828 3405 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-net\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681838 3405 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-bpf-maps\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681846 3405 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94a4df6d-5281-4ccd-962b-6d5840557a94-cilium-config-path\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681855 3405 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94a4df6d-5281-4ccd-962b-6d5840557a94-hubble-tls\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681865 3405 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-hostproc\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681873 3405 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-host-proc-sys-kernel\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.682286 kubelet[3405]: I0213 15:22:22.681883 3405 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94a4df6d-5281-4ccd-962b-6d5840557a94-cni-path\") on node \"ci-4230.0.1-a-0ecc5c528f\" DevicePath \"\"" Feb 13 15:22:22.857966 kubelet[3405]: I0213 15:22:22.857618 3405 scope.go:117] "RemoveContainer" containerID="ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92" Feb 13 15:22:22.861947 containerd[1794]: time="2025-02-13T15:22:22.861888685Z" level=info msg="RemoveContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\"" Feb 13 15:22:22.864543 systemd[1]: Removed slice kubepods-besteffort-pode4a415a5_a9c1_41a7_941c_2ac929d78ff7.slice - libcontainer container kubepods-besteffort-pode4a415a5_a9c1_41a7_941c_2ac929d78ff7.slice. Feb 13 15:22:22.872078 systemd[1]: Removed slice kubepods-burstable-pod94a4df6d_5281_4ccd_962b_6d5840557a94.slice - libcontainer container kubepods-burstable-pod94a4df6d_5281_4ccd_962b_6d5840557a94.slice. Feb 13 15:22:22.872193 systemd[1]: kubepods-burstable-pod94a4df6d_5281_4ccd_962b_6d5840557a94.slice: Consumed 6.510s CPU time, 126M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 15:22:22.880147 containerd[1794]: time="2025-02-13T15:22:22.880064048Z" level=info msg="RemoveContainer for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" returns successfully" Feb 13 15:22:22.880454 kubelet[3405]: I0213 15:22:22.880429 3405 scope.go:117] "RemoveContainer" containerID="ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92" Feb 13 15:22:22.880791 containerd[1794]: time="2025-02-13T15:22:22.880697486Z" level=error msg="ContainerStatus for \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\": not found" Feb 13 15:22:22.880885 kubelet[3405]: E0213 15:22:22.880843 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\": not found" containerID="ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92" Feb 13 15:22:22.880960 kubelet[3405]: I0213 15:22:22.880874 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92"} err="failed to get container status \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecefabbc9e6b528039718356c595e2a1ac39c3dac647b5f8969ced662fef9e92\": not found" Feb 13 15:22:22.880960 kubelet[3405]: I0213 15:22:22.880957 3405 scope.go:117] "RemoveContainer" containerID="3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540" Feb 13 15:22:22.883529 containerd[1794]: time="2025-02-13T15:22:22.883471400Z" level=info msg="RemoveContainer for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\"" Feb 13 15:22:22.893490 containerd[1794]: time="2025-02-13T15:22:22.893443300Z" level=info msg="RemoveContainer for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" returns successfully" Feb 13 15:22:22.894506 kubelet[3405]: I0213 15:22:22.894446 3405 scope.go:117] "RemoveContainer" containerID="ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b" Feb 13 15:22:22.897156 containerd[1794]: time="2025-02-13T15:22:22.897044892Z" level=info msg="RemoveContainer for \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\"" Feb 13 15:22:22.906020 containerd[1794]: time="2025-02-13T15:22:22.905977074Z" level=info msg="RemoveContainer for \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\" returns successfully" Feb 13 15:22:22.906481 kubelet[3405]: I0213 15:22:22.906446 3405 scope.go:117] "RemoveContainer" containerID="98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494" Feb 13 15:22:22.907957 containerd[1794]: time="2025-02-13T15:22:22.907781830Z" level=info msg="RemoveContainer for \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\"" Feb 13 15:22:22.914542 containerd[1794]: time="2025-02-13T15:22:22.914497856Z" level=info msg="RemoveContainer for \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\" returns successfully" Feb 13 15:22:22.915014 kubelet[3405]: I0213 15:22:22.914878 3405 scope.go:117] "RemoveContainer" containerID="952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4" Feb 13 15:22:22.916143 containerd[1794]: 
time="2025-02-13T15:22:22.916118413Z" level=info msg="RemoveContainer for \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\"" Feb 13 15:22:22.923412 containerd[1794]: time="2025-02-13T15:22:22.923326638Z" level=info msg="RemoveContainer for \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\" returns successfully" Feb 13 15:22:22.923605 kubelet[3405]: I0213 15:22:22.923577 3405 scope.go:117] "RemoveContainer" containerID="d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571" Feb 13 15:22:22.924790 containerd[1794]: time="2025-02-13T15:22:22.924760355Z" level=info msg="RemoveContainer for \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\"" Feb 13 15:22:22.932097 containerd[1794]: time="2025-02-13T15:22:22.932043300Z" level=info msg="RemoveContainer for \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\" returns successfully" Feb 13 15:22:22.932484 kubelet[3405]: I0213 15:22:22.932372 3405 scope.go:117] "RemoveContainer" containerID="3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540" Feb 13 15:22:22.932738 containerd[1794]: time="2025-02-13T15:22:22.932698579Z" level=error msg="ContainerStatus for \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\": not found" Feb 13 15:22:22.932901 kubelet[3405]: E0213 15:22:22.932868 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\": not found" containerID="3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540" Feb 13 15:22:22.932969 kubelet[3405]: I0213 15:22:22.932940 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540"} err="failed to get container status \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ac423a187349f8f0f33825c690767efcc9f7f4429d74c786636eb41aedbc540\": not found" Feb 13 15:22:22.932997 kubelet[3405]: I0213 15:22:22.932970 3405 scope.go:117] "RemoveContainer" containerID="ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b" Feb 13 15:22:22.933208 containerd[1794]: time="2025-02-13T15:22:22.933173258Z" level=error msg="ContainerStatus for \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\": not found" Feb 13 15:22:22.933338 kubelet[3405]: E0213 15:22:22.933300 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\": not found" containerID="ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b" Feb 13 15:22:22.933370 kubelet[3405]: I0213 15:22:22.933340 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b"} err="failed to get container status 
\"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecf771c1b9953f1b6dd44654352139b3ba590942e893c153427ff2b5e020a24b\": not found" Feb 13 15:22:22.933370 kubelet[3405]: I0213 15:22:22.933357 3405 scope.go:117] "RemoveContainer" containerID="98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494" Feb 13 15:22:22.933641 containerd[1794]: time="2025-02-13T15:22:22.933609097Z" level=error msg="ContainerStatus for \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\": not found" Feb 13 15:22:22.933763 kubelet[3405]: E0213 15:22:22.933737 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\": not found" containerID="98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494" Feb 13 15:22:22.933846 kubelet[3405]: I0213 15:22:22.933822 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494"} err="failed to get container status \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\": rpc error: code = NotFound desc = an error occurred when try to find container \"98f64bb43d20a116d3e3959a877fb26418f7b620c3a9d2c1f278b35670f2d494\": not found" Feb 13 15:22:22.933873 kubelet[3405]: I0213 15:22:22.933846 3405 scope.go:117] "RemoveContainer" containerID="952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4" Feb 13 15:22:22.934078 containerd[1794]: time="2025-02-13T15:22:22.934045136Z" level=error msg="ContainerStatus for \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\": not found" Feb 13 15:22:22.934194 kubelet[3405]: E0213 15:22:22.934169 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\": not found" containerID="952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4" Feb 13 15:22:22.934232 kubelet[3405]: I0213 15:22:22.934195 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4"} err="failed to get container status \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"952ebbcfc730dc8200f9beeb0e6645b82f4666464073ffdbd4e2f406bf2fdfa4\": not found" Feb 13 15:22:22.934232 kubelet[3405]: I0213 15:22:22.934212 3405 scope.go:117] "RemoveContainer" containerID="d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571" Feb 13 15:22:22.934389 containerd[1794]: time="2025-02-13T15:22:22.934356015Z" level=error msg="ContainerStatus for \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\": not found" Feb 13 15:22:22.934526 kubelet[3405]: E0213 15:22:22.934502 3405 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\": not found" containerID="d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571" Feb 13 15:22:22.934628 kubelet[3405]: I0213 15:22:22.934599 3405 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571"} err="failed to get container status \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\": rpc error: code = NotFound desc = an error occurred when try to find container \"d145c5af57ffa798e60aff0b73c680051e71307c39b9c184c55647f793e8e571\": not found" Feb 13 15:22:23.318541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31-rootfs.mount: Deactivated successfully. Feb 13 15:22:23.318641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31-shm.mount: Deactivated successfully. Feb 13 15:22:23.318700 systemd[1]: var-lib-kubelet-pods-e4a415a5\x2da9c1\x2d41a7\x2d941c\x2d2ac929d78ff7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpwkt.mount: Deactivated successfully. Feb 13 15:22:23.318750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6-rootfs.mount: Deactivated successfully. Feb 13 15:22:23.318801 systemd[1]: var-lib-kubelet-pods-94a4df6d\x2d5281\x2d4ccd\x2d962b\x2d6d5840557a94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkw8v.mount: Deactivated successfully. Feb 13 15:22:23.318852 systemd[1]: var-lib-kubelet-pods-94a4df6d\x2d5281\x2d4ccd\x2d962b\x2d6d5840557a94-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:22:23.318918 systemd[1]: var-lib-kubelet-pods-94a4df6d\x2d5281\x2d4ccd\x2d962b\x2d6d5840557a94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:22:24.313812 sshd[5032]: Connection closed by 10.200.16.10 port 46550 Feb 13 15:22:24.314231 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:24.318893 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:46550.service: Deactivated successfully. Feb 13 15:22:24.321513 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:22:24.321903 systemd[1]: session-25.scope: Consumed 1.113s CPU time, 23.4M memory peak. Feb 13 15:22:24.323582 systemd-logind[1720]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:22:24.325257 systemd-logind[1720]: Removed session 25. 
Feb 13 15:22:24.364102 kubelet[3405]: I0213 15:22:24.364032 3405 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" path="/var/lib/kubelet/pods/94a4df6d-5281-4ccd-962b-6d5840557a94/volumes"
Feb 13 15:22:24.365708 kubelet[3405]: I0213 15:22:24.365008 3405 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a415a5-a9c1-41a7-941c-2ac929d78ff7" path="/var/lib/kubelet/pods/e4a415a5-a9c1-41a7-941c-2ac929d78ff7/volumes"
Feb 13 15:22:24.367179 containerd[1794]: time="2025-02-13T15:22:24.367133251Z" level=info msg="StopPodSandbox for \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\""
Feb 13 15:22:24.367475 containerd[1794]: time="2025-02-13T15:22:24.367243251Z" level=info msg="TearDown network for sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" successfully"
Feb 13 15:22:24.367475 containerd[1794]: time="2025-02-13T15:22:24.367255211Z" level=info msg="StopPodSandbox for \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" returns successfully"
Feb 13 15:22:24.368913 containerd[1794]: time="2025-02-13T15:22:24.368076569Z" level=info msg="RemovePodSandbox for \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\""
Feb 13 15:22:24.368913 containerd[1794]: time="2025-02-13T15:22:24.368157689Z" level=info msg="Forcibly stopping sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\""
Feb 13 15:22:24.368913 containerd[1794]: time="2025-02-13T15:22:24.368220769Z" level=info msg="TearDown network for sandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" successfully"
Feb 13 15:22:24.375560 containerd[1794]: time="2025-02-13T15:22:24.375510314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:24.375673 containerd[1794]: time="2025-02-13T15:22:24.375582954Z" level=info msg="RemovePodSandbox \"2773dbe6a781ef4c816e490dd9f8784f6c969c6c75ee1516dc3e74c4372fc5c6\" returns successfully"
Feb 13 15:22:24.376208 containerd[1794]: time="2025-02-13T15:22:24.376173233Z" level=info msg="StopPodSandbox for \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\""
Feb 13 15:22:24.376270 containerd[1794]: time="2025-02-13T15:22:24.376257032Z" level=info msg="TearDown network for sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" successfully"
Feb 13 15:22:24.376294 containerd[1794]: time="2025-02-13T15:22:24.376268512Z" level=info msg="StopPodSandbox for \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" returns successfully"
Feb 13 15:22:24.376850 containerd[1794]: time="2025-02-13T15:22:24.376586552Z" level=info msg="RemovePodSandbox for \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\""
Feb 13 15:22:24.376850 containerd[1794]: time="2025-02-13T15:22:24.376619792Z" level=info msg="Forcibly stopping sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\""
Feb 13 15:22:24.376850 containerd[1794]: time="2025-02-13T15:22:24.376698032Z" level=info msg="TearDown network for sandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" successfully"
Feb 13 15:22:24.385381 containerd[1794]: time="2025-02-13T15:22:24.385209174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:24.385381 containerd[1794]: time="2025-02-13T15:22:24.385271894Z" level=info msg="RemovePodSandbox \"268126f636b55a40a70204592440f3d1ecac0d64bbb694ee101bf7934a289a31\" returns successfully"
Feb 13 15:22:24.395210 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:46552.service - OpenSSH per-connection server daemon (10.200.16.10:46552).
Feb 13 15:22:24.477213 kubelet[3405]: E0213 15:22:24.477074 3405 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:22:24.830255 sshd[5193]: Accepted publickey for core from 10.200.16.10 port 46552 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:22:24.831544 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:24.836550 systemd-logind[1720]: New session 26 of user core.
Feb 13 15:22:24.843340 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:22:25.361786 kubelet[3405]: E0213 15:22:25.361718 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-t6csz" podUID="28d07978-2cc0-4c29-8c2a-d97efb083ca5"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919526 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4a415a5-a9c1-41a7-941c-2ac929d78ff7" containerName="cilium-operator"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919558 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="clean-cilium-state"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919566 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="mount-cgroup"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919571 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="apply-sysctl-overwrites"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919577 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="mount-bpf-fs"
Feb 13 15:22:25.921230 kubelet[3405]: E0213 15:22:25.919583 3405 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="cilium-agent"
Feb 13 15:22:25.921230 kubelet[3405]: I0213 15:22:25.919605 3405 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a415a5-a9c1-41a7-941c-2ac929d78ff7" containerName="cilium-operator"
Feb 13 15:22:25.921230 kubelet[3405]: I0213 15:22:25.919612 3405 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a4df6d-5281-4ccd-962b-6d5840557a94" containerName="cilium-agent"
Feb 13 15:22:25.928038 systemd[1]: Created slice kubepods-burstable-podb640391a_42fb_44e1_99c4_f1d426a08719.slice - libcontainer container kubepods-burstable-podb640391a_42fb_44e1_99c4_f1d426a08719.slice.
Feb 13 15:22:25.943540 sshd[5195]: Connection closed by 10.200.16.10 port 46552
Feb 13 15:22:25.947203 sshd-session[5193]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:25.950672 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:46552.service: Deactivated successfully.
Feb 13 15:22:25.954815 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:22:25.957905 systemd-logind[1720]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:22:25.960039 systemd-logind[1720]: Removed session 26.
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997355 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-lib-modules\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997397 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b640391a-42fb-44e1-99c4-f1d426a08719-cilium-ipsec-secrets\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997418 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-hostproc\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997435 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b640391a-42fb-44e1-99c4-f1d426a08719-clustermesh-secrets\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997453 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-bpf-maps\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.997749 kubelet[3405]: I0213 15:22:25.997467 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-cilium-cgroup\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997481 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-etc-cni-netd\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997497 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-host-proc-sys-net\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997516 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-host-proc-sys-kernel\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997531 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-xtables-lock\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997547 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b640391a-42fb-44e1-99c4-f1d426a08719-hubble-tls\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998000 kubelet[3405]: I0213 15:22:25.997562 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-cilium-run\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998139 kubelet[3405]: I0213 15:22:25.997579 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b640391a-42fb-44e1-99c4-f1d426a08719-cni-path\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998139 kubelet[3405]: I0213 15:22:25.997642 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b640391a-42fb-44e1-99c4-f1d426a08719-cilium-config-path\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:25.998139 kubelet[3405]: I0213 15:22:25.997665 3405 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wx5\" (UniqueName: \"kubernetes.io/projected/b640391a-42fb-44e1-99c4-f1d426a08719-kube-api-access-h2wx5\") pod \"cilium-c9rcb\" (UID: \"b640391a-42fb-44e1-99c4-f1d426a08719\") " pod="kube-system/cilium-c9rcb"
Feb 13 15:22:26.020084 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:46554.service - OpenSSH per-connection server daemon (10.200.16.10:46554).
Feb 13 15:22:26.233043 containerd[1794]: time="2025-02-13T15:22:26.232387938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9rcb,Uid:b640391a-42fb-44e1-99c4-f1d426a08719,Namespace:kube-system,Attempt:0,}"
Feb 13 15:22:26.287541 containerd[1794]: time="2025-02-13T15:22:26.287432649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:22:26.288120 containerd[1794]: time="2025-02-13T15:22:26.288057448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:22:26.288193 containerd[1794]: time="2025-02-13T15:22:26.288138048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:22:26.288420 containerd[1794]: time="2025-02-13T15:22:26.288375447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:22:26.304343 systemd[1]: Started cri-containerd-389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87.scope - libcontainer container 389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87.
Feb 13 15:22:26.334022 containerd[1794]: time="2025-02-13T15:22:26.333979014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9rcb,Uid:b640391a-42fb-44e1-99c4-f1d426a08719,Namespace:kube-system,Attempt:0,} returns sandbox id \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\""
Feb 13 15:22:26.337634 containerd[1794]: time="2025-02-13T15:22:26.337562608Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:22:26.390839 containerd[1794]: time="2025-02-13T15:22:26.390790243Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee\""
Feb 13 15:22:26.391676 containerd[1794]: time="2025-02-13T15:22:26.391646961Z" level=info msg="StartContainer for \"1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee\""
Feb 13 15:22:26.416321 systemd[1]: Started cri-containerd-1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee.scope - libcontainer container 1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee.
Feb 13 15:22:26.446852 containerd[1794]: time="2025-02-13T15:22:26.444463436Z" level=info msg="StartContainer for \"1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee\" returns successfully"
Feb 13 15:22:26.450505 systemd[1]: cri-containerd-1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee.scope: Deactivated successfully.
Feb 13 15:22:26.454244 sshd[5206]: Accepted publickey for core from 10.200.16.10 port 46554 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:22:26.456483 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:26.465712 systemd-logind[1720]: New session 27 of user core.
Feb 13 15:22:26.469282 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:22:26.517070 containerd[1794]: time="2025-02-13T15:22:26.516998400Z" level=info msg="shim disconnected" id=1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee namespace=k8s.io
Feb 13 15:22:26.517559 containerd[1794]: time="2025-02-13T15:22:26.517131799Z" level=warning msg="cleaning up after shim disconnected" id=1e765ff678f946108b48ca949f8bc6a3a960bd66b17086f1dde9eea9c54840ee namespace=k8s.io
Feb 13 15:22:26.517559 containerd[1794]: time="2025-02-13T15:22:26.517143759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:26.751109 sshd[5303]: Connection closed by 10.200.16.10 port 46554
Feb 13 15:22:26.751730 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:26.755240 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:46554.service: Deactivated successfully.
Feb 13 15:22:26.757014 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:22:26.757825 systemd-logind[1720]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:22:26.759002 systemd-logind[1720]: Removed session 27.
Feb 13 15:22:26.838345 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:46562.service - OpenSSH per-connection server daemon (10.200.16.10:46562).
Feb 13 15:22:26.882118 containerd[1794]: time="2025-02-13T15:22:26.882049652Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:22:26.922611 containerd[1794]: time="2025-02-13T15:22:26.922567407Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad\""
Feb 13 15:22:26.923183 containerd[1794]: time="2025-02-13T15:22:26.923068026Z" level=info msg="StartContainer for \"0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad\""
Feb 13 15:22:26.947285 systemd[1]: Started cri-containerd-0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad.scope - libcontainer container 0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad.
Feb 13 15:22:26.974636 containerd[1794]: time="2025-02-13T15:22:26.974584303Z" level=info msg="StartContainer for \"0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad\" returns successfully"
Feb 13 15:22:26.977102 systemd[1]: cri-containerd-0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad.scope: Deactivated successfully.
Feb 13 15:22:27.023433 containerd[1794]: time="2025-02-13T15:22:27.023321745Z" level=info msg="shim disconnected" id=0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad namespace=k8s.io
Feb 13 15:22:27.023433 containerd[1794]: time="2025-02-13T15:22:27.023401065Z" level=warning msg="cleaning up after shim disconnected" id=0bbfbcb2fe744896315ee2e14aac6839bae3b2c5241d057412eab5811074ffad namespace=k8s.io
Feb 13 15:22:27.023433 containerd[1794]: time="2025-02-13T15:22:27.023410825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:27.296158 sshd[5322]: Accepted publickey for core from 10.200.16.10 port 46562 ssh2: RSA SHA256:0w1Drd4iRIF6O2cXsC7c8NcGVfQsefO7TKLCNo14104
Feb 13 15:22:27.298059 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:27.302386 systemd-logind[1720]: New session 28 of user core.
Feb 13 15:22:27.308310 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:22:27.361314 kubelet[3405]: E0213 15:22:27.361255 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-t6csz" podUID="28d07978-2cc0-4c29-8c2a-d97efb083ca5"
Feb 13 15:22:27.886111 containerd[1794]: time="2025-02-13T15:22:27.885769797Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:22:27.930463 containerd[1794]: time="2025-02-13T15:22:27.930405046Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf\""
Feb 13 15:22:27.932121 containerd[1794]: time="2025-02-13T15:22:27.932071203Z" level=info msg="StartContainer for \"dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf\""
Feb 13 15:22:27.960264 systemd[1]: Started cri-containerd-dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf.scope - libcontainer container dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf.
Feb 13 15:22:27.990683 systemd[1]: cri-containerd-dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf.scope: Deactivated successfully.
Feb 13 15:22:27.994800 containerd[1794]: time="2025-02-13T15:22:27.994566182Z" level=info msg="StartContainer for \"dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf\" returns successfully"
Feb 13 15:22:28.030146 containerd[1794]: time="2025-02-13T15:22:28.029934965Z" level=info msg="shim disconnected" id=dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf namespace=k8s.io
Feb 13 15:22:28.030146 containerd[1794]: time="2025-02-13T15:22:28.030010565Z" level=warning msg="cleaning up after shim disconnected" id=dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf namespace=k8s.io
Feb 13 15:22:28.030146 containerd[1794]: time="2025-02-13T15:22:28.030019365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:28.102858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbd81fb1452069b060dbf8e26028566fbf066e9269f69933f30f16274ea74daf-rootfs.mount: Deactivated successfully.
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.615453 1726 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.615501 1726 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.615664 1726 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.616011 1726 omaha_request_params.cc:62] Current group set to alpha
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.616158 1726 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 15:22:28.616159 update_engine[1726]: I20250213 15:22:28.616170 1726 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 15:22:28.616600 update_engine[1726]: I20250213 15:22:28.616185 1726 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:22:28.616600 update_engine[1726]: I20250213 15:22:28.616216 1726 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 15:22:28.616600 update_engine[1726]: I20250213 15:22:28.616269 1726 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:22:28.616600 update_engine[1726]: I20250213 15:22:28.616276 1726 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Feb 13 15:22:28.616600 update_engine[1726]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Feb 13 15:22:28.616600 update_engine[1726]: <os version="Chateau" platform="CoreOS" sp="4230.0.1_aarch64"></os>
Feb 13 15:22:28.616600 update_engine[1726]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.1" track="alpha" bootid="{4d15d152-770d-4050-97c8-966435a6d864}" oem="azure" oemversion="2.9.1.1-r3" alephversion="4230.0.1" machineid="e2988b96301f4c3ea5017bc9062c9210" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" >
Feb 13 15:22:28.616600 update_engine[1726]: <ping active="1"></ping>
Feb 13 15:22:28.616600 update_engine[1726]: <updatecheck></updatecheck>
Feb 13 15:22:28.616600 update_engine[1726]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Feb 13 15:22:28.616600 update_engine[1726]: </app>
Feb 13 15:22:28.616600 update_engine[1726]: </request>
Feb 13 15:22:28.616600 update_engine[1726]: I20250213 15:22:28.616282 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:22:28.617545 locksmithd[1843]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 15:22:28.617737 update_engine[1726]: I20250213 15:22:28.617647 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:22:28.618029 update_engine[1726]: I20250213 15:22:28.617995 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:22:28.657285 update_engine[1726]: E20250213 15:22:28.657218 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:22:28.657416 update_engine[1726]: I20250213 15:22:28.657332 1726 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 15:22:28.890707 containerd[1794]: time="2025-02-13T15:22:28.890509741Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:22:28.928571 containerd[1794]: time="2025-02-13T15:22:28.928281240Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30\""
Feb 13 15:22:28.929503 containerd[1794]: time="2025-02-13T15:22:28.929428438Z" level=info msg="StartContainer for \"0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30\""
Feb 13 15:22:28.961295 systemd[1]: Started cri-containerd-0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30.scope - libcontainer container 0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30.
Feb 13 15:22:28.988127 systemd[1]: cri-containerd-0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30.scope: Deactivated successfully.
Feb 13 15:22:28.993834 containerd[1794]: time="2025-02-13T15:22:28.993785735Z" level=info msg="StartContainer for \"0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30\" returns successfully"
Feb 13 15:22:29.027677 containerd[1794]: time="2025-02-13T15:22:29.027611680Z" level=info msg="shim disconnected" id=0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30 namespace=k8s.io
Feb 13 15:22:29.027677 containerd[1794]: time="2025-02-13T15:22:29.027669480Z" level=warning msg="cleaning up after shim disconnected" id=0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30 namespace=k8s.io
Feb 13 15:22:29.027677 containerd[1794]: time="2025-02-13T15:22:29.027682320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:29.102928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0308b37e28e662a308519188421832979b104196bd34a0286722110cc3fa9c30-rootfs.mount: Deactivated successfully.
Feb 13 15:22:29.361760 kubelet[3405]: E0213 15:22:29.361689 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-t6csz" podUID="28d07978-2cc0-4c29-8c2a-d97efb083ca5"
Feb 13 15:22:29.478689 kubelet[3405]: E0213 15:22:29.478557 3405 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:22:29.897645 containerd[1794]: time="2025-02-13T15:22:29.897506761Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:22:29.932583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428391519.mount: Deactivated successfully.
Feb 13 15:22:29.946532 containerd[1794]: time="2025-02-13T15:22:29.946466602Z" level=info msg="CreateContainer within sandbox \"389d8d40dd90018c14b48d03a7cb6de882744c5309fa16c3ae427187c6883d87\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5\""
Feb 13 15:22:29.947314 containerd[1794]: time="2025-02-13T15:22:29.947286960Z" level=info msg="StartContainer for \"a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5\""
Feb 13 15:22:29.975306 systemd[1]: Started cri-containerd-a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5.scope - libcontainer container a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5.
Feb 13 15:22:30.006625 containerd[1794]: time="2025-02-13T15:22:30.006562905Z" level=info msg="StartContainer for \"a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5\" returns successfully"
Feb 13 15:22:30.075954 kubelet[3405]: I0213 15:22:30.075895 3405 setters.go:600] "Node became not ready" node="ci-4230.0.1-a-0ecc5c528f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:22:30Z","lastTransitionTime":"2025-02-13T15:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:22:30.603125 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:22:30.914994 kubelet[3405]: I0213 15:22:30.914501 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c9rcb" podStartSLOduration=5.914480712 podStartE2EDuration="5.914480712s" podCreationTimestamp="2025-02-13 15:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:22:30.914345312 +0000 UTC m=+246.661944878" watchObservedRunningTime="2025-02-13 15:22:30.914480712 +0000 UTC m=+246.662080278"
Feb 13 15:22:31.361488 kubelet[3405]: E0213 15:22:31.361433 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-t6csz" podUID="28d07978-2cc0-4c29-8c2a-d97efb083ca5"
Feb 13 15:22:33.360963 kubelet[3405]: E0213 15:22:33.360893 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-t6csz" podUID="28d07978-2cc0-4c29-8c2a-d97efb083ca5"
Feb 13 15:22:33.383259 systemd-networkd[1352]: lxc_health: Link UP
Feb 13 15:22:33.386254 systemd-networkd[1352]: lxc_health: Gained carrier
Feb 13 15:22:35.221220 systemd-networkd[1352]: lxc_health: Gained IPv6LL
Feb 13 15:22:36.026649 systemd[1]: run-containerd-runc-k8s.io-a0b1a25dd2bef50aff310152beb887d7f917670de846c13bce8af6542cd245d5-runc.bkOsiW.mount: Deactivated successfully.
Feb 13 15:22:38.620286 update_engine[1726]: I20250213 15:22:38.620210 1726 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:22:38.620660 update_engine[1726]: I20250213 15:22:38.620455 1726 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:22:38.620758 update_engine[1726]: I20250213 15:22:38.620711 1726 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:22:38.647368 update_engine[1726]: E20250213 15:22:38.647300 1726 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:22:38.647522 update_engine[1726]: I20250213 15:22:38.647387 1726 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 15:22:40.435522 sshd[5384]: Connection closed by 10.200.16.10 port 46562
Feb 13 15:22:40.436244 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:40.440264 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:46562.service: Deactivated successfully.
Feb 13 15:22:40.442327 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:22:40.442970 systemd-logind[1720]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:22:40.444636 systemd-logind[1720]: Removed session 28.