Mar 19 11:50:02.350347 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 19 11:50:02.350369 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:50:02.350377 kernel: KASLR enabled
Mar 19 11:50:02.350383 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 19 11:50:02.350390 kernel: printk: bootconsole [pl11] enabled
Mar 19 11:50:02.350395 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:50:02.350402 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eac7018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Mar 19 11:50:02.350408 kernel: random: crng init done
Mar 19 11:50:02.350414 kernel: secureboot: Secure boot disabled
Mar 19 11:50:02.350419 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:50:02.350425 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 19 11:50:02.350431 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350437 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350444 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 19 11:50:02.350451 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350457 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350463 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350471 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350477 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350483 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350489 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 19 11:50:02.350495 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:50:02.350501 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 19 11:50:02.350507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 19 11:50:02.350513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 19 11:50:02.350519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 19 11:50:02.350526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 19 11:50:02.350532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 19 11:50:02.350539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 19 11:50:02.350545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 19 11:50:02.350551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 19 11:50:02.350557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 19 11:50:02.350563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 19 11:50:02.350569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 19 11:50:02.350575 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 19 11:50:02.350581 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 19 11:50:02.350587 kernel: Zone ranges:
Mar 19 11:50:02.350593 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 19 11:50:02.350598 kernel: DMA32 empty
Mar 19 11:50:02.350605 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 19 11:50:02.350614 kernel: Movable zone start for each node
Mar 19 11:50:02.350620 kernel: Early memory node ranges
Mar 19 11:50:02.350627 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 19 11:50:02.350633 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 19 11:50:02.350640 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 19 11:50:02.350648 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 19 11:50:02.350654 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 19 11:50:02.350660 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 19 11:50:02.350667 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 19 11:50:02.350673 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 19 11:50:02.350679 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 19 11:50:02.350686 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 19 11:50:02.350692 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 19 11:50:02.350699 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:50:02.350705 kernel: psci: PSCIv1.1 detected in firmware.
Mar 19 11:50:02.350711 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:50:02.350718 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 19 11:50:02.350726 kernel: psci: SMC Calling Convention v1.4
Mar 19 11:50:02.350732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 19 11:50:02.350738 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 19 11:50:02.350745 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:50:02.350751 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:50:02.350758 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 19 11:50:02.350764 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:50:02.350770 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:50:02.350777 kernel: CPU features: detected: Hardware dirty bit management
Mar 19 11:50:02.350783 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:50:02.350789 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 19 11:50:02.350797 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 19 11:50:02.350804 kernel: CPU features: detected: ARM erratum 1418040
Mar 19 11:50:02.350810 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 19 11:50:02.350816 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 19 11:50:02.350823 kernel: alternatives: applying boot alternatives
Mar 19 11:50:02.350830 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:50:02.350837 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:50:02.350844 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:50:02.350850 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:50:02.350857 kernel: Fallback order for Node 0: 0
Mar 19 11:50:02.350863 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 19 11:50:02.350871 kernel: Policy zone: Normal
Mar 19 11:50:02.350877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:50:02.350883 kernel: software IO TLB: area num 2.
Mar 19 11:50:02.350890 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Mar 19 11:50:02.350897 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Mar 19 11:50:02.350903 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 19 11:50:02.350909 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:50:02.350916 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:50:02.350923 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 19 11:50:02.350929 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:50:02.350936 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:50:02.350944 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:50:02.350950 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 19 11:50:02.350957 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:50:02.350963 kernel: GICv3: 960 SPIs implemented
Mar 19 11:50:02.350970 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:50:02.350976 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:50:02.350982 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 19 11:50:02.350989 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 19 11:50:02.350995 kernel: ITS: No ITS available, not enabling LPIs
Mar 19 11:50:02.351001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:50:02.351008 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:50:02.351014 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 19 11:50:02.351022 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 19 11:50:02.351029 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 19 11:50:02.351035 kernel: Console: colour dummy device 80x25
Mar 19 11:50:02.351042 kernel: printk: console [tty1] enabled
Mar 19 11:50:02.351049 kernel: ACPI: Core revision 20230628
Mar 19 11:50:02.351056 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 19 11:50:02.351062 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:50:02.351069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:50:02.351075 kernel: landlock: Up and running.
Mar 19 11:50:02.351083 kernel: SELinux: Initializing.
Mar 19 11:50:02.351090 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:50:02.351096 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:50:02.351103 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:50:02.351110 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:50:02.351116 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 19 11:50:02.351123 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 19 11:50:02.351135 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 19 11:50:02.355195 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:50:02.355205 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:50:02.355213 kernel: Remapping and enabling EFI services.
Mar 19 11:50:02.355220 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:50:02.355234 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:50:02.355241 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 19 11:50:02.355248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:50:02.355255 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 19 11:50:02.355263 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 11:50:02.355271 kernel: SMP: Total of 2 processors activated.
Mar 19 11:50:02.355278 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:50:02.355286 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 19 11:50:02.355293 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 19 11:50:02.355300 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:50:02.355307 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 19 11:50:02.355314 kernel: CPU features: detected: LSE atomic instructions
Mar 19 11:50:02.355321 kernel: CPU features: detected: Privileged Access Never
Mar 19 11:50:02.355328 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:50:02.355337 kernel: alternatives: applying system-wide alternatives
Mar 19 11:50:02.355344 kernel: devtmpfs: initialized
Mar 19 11:50:02.355351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:50:02.355358 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 19 11:50:02.355365 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:50:02.355372 kernel: SMBIOS 3.1.0 present.
Mar 19 11:50:02.355379 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 19 11:50:02.355386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:50:02.355393 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:50:02.355402 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:50:02.355409 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:50:02.355416 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:50:02.355423 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 19 11:50:02.355431 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:50:02.355438 kernel: cpuidle: using governor menu
Mar 19 11:50:02.355445 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:50:02.355452 kernel: ASID allocator initialised with 32768 entries
Mar 19 11:50:02.355459 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:50:02.355468 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:50:02.355475 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 19 11:50:02.355482 kernel: Modules: 0 pages in range for non-PLT usage
Mar 19 11:50:02.355489 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:50:02.355496 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:50:02.355503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:50:02.355510 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:50:02.355518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:50:02.355525 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:50:02.355533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:50:02.355540 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:50:02.355547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:50:02.355554 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:50:02.355562 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:50:02.355569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:50:02.355576 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:50:02.355583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:50:02.355590 kernel: ACPI: Interpreter enabled
Mar 19 11:50:02.355599 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:50:02.355606 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 19 11:50:02.355613 kernel: printk: console [ttyAMA0] enabled
Mar 19 11:50:02.355620 kernel: printk: bootconsole [pl11] disabled
Mar 19 11:50:02.355628 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 19 11:50:02.355634 kernel: iommu: Default domain type: Translated
Mar 19 11:50:02.355641 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:50:02.355648 kernel: efivars: Registered efivars operations
Mar 19 11:50:02.355656 kernel: vgaarb: loaded
Mar 19 11:50:02.355664 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:50:02.355671 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:50:02.355679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:50:02.355686 kernel: pnp: PnP ACPI init
Mar 19 11:50:02.355692 kernel: pnp: PnP ACPI: found 0 devices
Mar 19 11:50:02.355699 kernel: NET: Registered PF_INET protocol family
Mar 19 11:50:02.355706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:50:02.355714 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:50:02.355721 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:50:02.355730 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:50:02.355737 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:50:02.355744 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:50:02.355751 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:50:02.355758 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:50:02.355765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:50:02.355772 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:50:02.355779 kernel: kvm [1]: HYP mode not available
Mar 19 11:50:02.355786 kernel: Initialise system trusted keyrings
Mar 19 11:50:02.355795 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:50:02.355802 kernel: Key type asymmetric registered
Mar 19 11:50:02.355808 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:50:02.355815 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:50:02.355822 kernel: io scheduler mq-deadline registered
Mar 19 11:50:02.355829 kernel: io scheduler kyber registered
Mar 19 11:50:02.355836 kernel: io scheduler bfq registered
Mar 19 11:50:02.355843 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:50:02.355850 kernel: thunder_xcv, ver 1.0
Mar 19 11:50:02.355859 kernel: thunder_bgx, ver 1.0
Mar 19 11:50:02.355866 kernel: nicpf, ver 1.0
Mar 19 11:50:02.355873 kernel: nicvf, ver 1.0
Mar 19 11:50:02.356019 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:50:02.356093 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:50:01 UTC (1742385001)
Mar 19 11:50:02.356103 kernel: efifb: probing for efifb
Mar 19 11:50:02.356110 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 19 11:50:02.356117 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 19 11:50:02.356127 kernel: efifb: scrolling: redraw
Mar 19 11:50:02.356134 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 19 11:50:02.356165 kernel: Console: switching to colour frame buffer device 128x48
Mar 19 11:50:02.356173 kernel: fb0: EFI VGA frame buffer device
Mar 19 11:50:02.356180 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 19 11:50:02.356187 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:50:02.356194 kernel: No ACPI PMU IRQ for CPU0
Mar 19 11:50:02.356201 kernel: No ACPI PMU IRQ for CPU1
Mar 19 11:50:02.356208 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 19 11:50:02.356218 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:50:02.356225 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:50:02.356232 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:50:02.356239 kernel: Segment Routing with IPv6
Mar 19 11:50:02.356246 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:50:02.356253 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:50:02.356259 kernel: Key type dns_resolver registered
Mar 19 11:50:02.356266 kernel: registered taskstats version 1
Mar 19 11:50:02.356273 kernel: Loading compiled-in X.509 certificates
Mar 19 11:50:02.356282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:50:02.356289 kernel: Key type .fscrypt registered
Mar 19 11:50:02.356296 kernel: Key type fscrypt-provisioning registered
Mar 19 11:50:02.356303 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:50:02.356310 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:50:02.356318 kernel: ima: No architecture policies found
Mar 19 11:50:02.356325 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:50:02.356332 kernel: clk: Disabling unused clocks
Mar 19 11:50:02.356339 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:50:02.356347 kernel: Run /init as init process
Mar 19 11:50:02.356354 kernel: with arguments:
Mar 19 11:50:02.356361 kernel: /init
Mar 19 11:50:02.356368 kernel: with environment:
Mar 19 11:50:02.356375 kernel: HOME=/
Mar 19 11:50:02.356382 kernel: TERM=linux
Mar 19 11:50:02.356388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:50:02.356396 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:50:02.356408 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:50:02.356416 systemd[1]: Detected virtualization microsoft.
Mar 19 11:50:02.356423 systemd[1]: Detected architecture arm64.
Mar 19 11:50:02.356431 systemd[1]: Running in initrd.
Mar 19 11:50:02.356438 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:50:02.356445 systemd[1]: Hostname set to .
Mar 19 11:50:02.356453 systemd[1]: Initializing machine ID from random generator.
Mar 19 11:50:02.356460 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:50:02.356469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:50:02.356477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:50:02.356485 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:50:02.356493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:50:02.356500 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:50:02.356508 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:50:02.356517 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:50:02.356527 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:50:02.356534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:50:02.356542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:50:02.356549 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:50:02.356557 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:50:02.356564 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:50:02.356572 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:50:02.356579 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:50:02.356589 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:50:02.356596 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:50:02.356604 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:50:02.356611 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:50:02.356619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:50:02.356626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:50:02.356634 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:50:02.356641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:50:02.356649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:50:02.356659 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:50:02.356666 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:50:02.356673 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:50:02.356702 systemd-journald[218]: Collecting audit messages is disabled.
Mar 19 11:50:02.356723 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:50:02.356731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:50:02.356740 systemd-journald[218]: Journal started
Mar 19 11:50:02.356759 systemd-journald[218]: Runtime Journal (/run/log/journal/789fb2fd4317432da0a7ea25a09ffdf7) is 8M, max 78.5M, 70.5M free.
Mar 19 11:50:02.362903 systemd-modules-load[220]: Inserted module 'overlay'
Mar 19 11:50:02.393161 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:50:02.393215 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:50:02.405232 kernel: Bridge firewalling registered
Mar 19 11:50:02.405373 systemd-modules-load[220]: Inserted module 'br_netfilter'
Mar 19 11:50:02.406239 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:50:02.420238 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:50:02.433668 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:50:02.449342 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:50:02.457714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:02.482400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:50:02.496947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:50:02.510676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:50:02.527366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:50:02.542343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:50:02.552620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:50:02.565844 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:50:02.580157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:50:02.608681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:50:02.622358 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:50:02.637864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:50:02.664998 dracut-cmdline[251]: dracut-dracut-053
Mar 19 11:50:02.664998 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:50:02.650775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:50:02.719859 systemd-resolved[256]: Positive Trust Anchors:
Mar 19 11:50:02.719879 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:50:02.719910 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:50:02.722009 systemd-resolved[256]: Defaulting to hostname 'linux'.
Mar 19 11:50:02.724758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:50:02.732629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:50:02.843161 kernel: SCSI subsystem initialized
Mar 19 11:50:02.851164 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:50:02.861168 kernel: iscsi: registered transport (tcp)
Mar 19 11:50:02.878695 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:50:02.878718 kernel: QLogic iSCSI HBA Driver
Mar 19 11:50:02.915199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:50:02.931363 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:50:02.963996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:50:02.964066 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:50:02.970724 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:50:03.019168 kernel: raid6: neonx8 gen() 15745 MB/s
Mar 19 11:50:03.039153 kernel: raid6: neonx4 gen() 15823 MB/s
Mar 19 11:50:03.059154 kernel: raid6: neonx2 gen() 13318 MB/s
Mar 19 11:50:03.080156 kernel: raid6: neonx1 gen() 10548 MB/s
Mar 19 11:50:03.100154 kernel: raid6: int64x8 gen() 6791 MB/s
Mar 19 11:50:03.120153 kernel: raid6: int64x4 gen() 7347 MB/s
Mar 19 11:50:03.141156 kernel: raid6: int64x2 gen() 6111 MB/s
Mar 19 11:50:03.164733 kernel: raid6: int64x1 gen() 5055 MB/s
Mar 19 11:50:03.164759 kernel: raid6: using algorithm neonx4 gen() 15823 MB/s
Mar 19 11:50:03.188932 kernel: raid6: .... xor() 12468 MB/s, rmw enabled
Mar 19 11:50:03.188943 kernel: raid6: using neon recovery algorithm
Mar 19 11:50:03.198150 kernel: xor: measuring software checksum speed
Mar 19 11:50:03.205261 kernel: 8regs : 20272 MB/sec
Mar 19 11:50:03.205273 kernel: 32regs : 21658 MB/sec
Mar 19 11:50:03.208940 kernel: arm64_neon : 27927 MB/sec
Mar 19 11:50:03.213776 kernel: xor: using function: arm64_neon (27927 MB/sec)
Mar 19 11:50:03.265174 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:50:03.275448 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:50:03.291293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:50:03.316946 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 19 11:50:03.323149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:50:03.340370 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 19 11:50:03.363422 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Mar 19 11:50:03.392755 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 11:50:03.411347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:50:03.452722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:50:03.473727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 19 11:50:03.512703 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 19 11:50:03.530196 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 11:50:03.551688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:50:03.566650 kernel: hv_vmbus: Vmbus version:5.3
Mar 19 11:50:03.573049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:50:03.588346 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 19 11:50:03.626587 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 19 11:50:03.626610 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 19 11:50:03.625135 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 11:50:03.645423 kernel: hv_vmbus: registering driver hv_storvsc
Mar 19 11:50:03.645456 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 19 11:50:03.645466 kernel: scsi host1: storvsc_host_t
Mar 19 11:50:03.663070 kernel: scsi host0: storvsc_host_t
Mar 19 11:50:03.663304 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 19 11:50:03.663316 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 19 11:50:03.671716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:50:03.678129 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:50:03.710739 kernel: hv_vmbus: registering driver hid_hyperv
Mar 19 11:50:03.710768 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 19 11:50:03.710802 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 19 11:50:03.704355 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:50:03.746238 kernel: PTP clock support registered
Mar 19 11:50:03.746260 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 19 11:50:03.750584 kernel: hv_vmbus: registering driver hv_netvsc
Mar 19 11:50:03.730646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:50:03.730868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:03.756916 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:50:03.881393 kernel: hv_utils: Registering HyperV Utility Driver
Mar 19 11:50:03.881417 kernel: hv_vmbus: registering driver hv_utils
Mar 19 11:50:03.881435 kernel: hv_utils: Shutdown IC version 3.2
Mar 19 11:50:03.881445 kernel: hv_utils: Heartbeat IC version 3.0
Mar 19 11:50:03.881453 kernel: hv_utils: TimeSync IC version 4.0
Mar 19 11:50:03.786949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:50:03.872891 systemd-resolved[256]: Clock change detected. Flushing caches.
Mar 19 11:50:03.916626 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 19 11:50:03.931338 kernel: hv_netvsc 0022487b-78ab-0022-487b-78ab0022487b eth0: VF slot 1 added
Mar 19 11:50:03.931469 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 19 11:50:03.931479 kernel: hv_vmbus: registering driver hv_pci
Mar 19 11:50:03.931489 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 19 11:50:03.905516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:03.947309 kernel: hv_pci 8ec0ace4-8a9a-4072-9553-0552eaca8b1f: PCI VMBus probing: Using version 0x10004
Mar 19 11:50:04.078105 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 19 11:50:04.078248 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 19 11:50:04.078335 kernel: hv_pci 8ec0ace4-8a9a-4072-9553-0552eaca8b1f: PCI host bridge to bus 8a9a:00
Mar 19 11:50:04.078424 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 19 11:50:04.078545 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 19 11:50:04.078641 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 19 11:50:04.078949 kernel: pci_bus 8a9a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 19 11:50:04.079426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 11:50:04.079440 kernel: pci_bus 8a9a:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 19 11:50:04.079918 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 19 11:50:04.080017 kernel: pci 8a9a:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 19 11:50:04.080129 kernel: pci 8a9a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 19 11:50:04.080222 kernel: pci 8a9a:00:02.0: enabling Extended Tags
Mar 19 11:50:04.080312 kernel: pci 8a9a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8a9a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 19 11:50:04.080401 kernel: pci_bus 8a9a:00: busn_res: [bus 00-ff] end is updated to 00
Mar 19 11:50:04.080478 kernel: pci 8a9a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 19 11:50:03.947949 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:50:04.014260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:50:04.115667 kernel: mlx5_core 8a9a:00:02.0: enabling device (0000 -> 0002)
Mar 19 11:50:04.333497 kernel: mlx5_core 8a9a:00:02.0: firmware version: 16.30.1284
Mar 19 11:50:04.333641 kernel: hv_netvsc 0022487b-78ab-0022-487b-78ab0022487b eth0: VF registering: eth1
Mar 19 11:50:04.333769 kernel: mlx5_core 8a9a:00:02.0 eth1: joined to eth0
Mar 19 11:50:04.333877 kernel: mlx5_core 8a9a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 19 11:50:04.341755 kernel: mlx5_core 8a9a:00:02.0 enP35482s1: renamed from eth1
Mar 19 11:50:04.500740 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (494)
Mar 19 11:50:04.513470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 19 11:50:04.529997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 19 11:50:04.631199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 19 11:50:04.710166 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (486)
Mar 19 11:50:04.725587 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 19 11:50:04.732988 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 19 11:50:04.761934 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 19 11:50:04.782735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 11:50:05.796203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 19 11:50:05.796259 disk-uuid[602]: The operation has completed successfully.
Mar 19 11:50:05.847473 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 19 11:50:05.847565 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 19 11:50:05.906887 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 19 11:50:05.921549 sh[689]: Success
Mar 19 11:50:05.949760 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 19 11:50:06.144309 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 19 11:50:06.168613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 19 11:50:06.180003 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 19 11:50:06.208505 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2
Mar 19 11:50:06.208536 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 19 11:50:06.215817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 19 11:50:06.221467 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 19 11:50:06.225956 kernel: BTRFS info (device dm-0): using free space tree
Mar 19 11:50:06.697994 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 19 11:50:06.704460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 19 11:50:06.727944 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 19 11:50:06.734881 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 19 11:50:06.779412 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18
Mar 19 11:50:06.779464 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 19 11:50:06.784852 kernel: BTRFS info (device sda6): using free space tree
Mar 19 11:50:06.805754 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 11:50:06.814216 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 19 11:50:06.831822 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18
Mar 19 11:50:06.838800 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 19 11:50:06.853994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 19 11:50:06.897050 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 11:50:06.923866 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:50:06.955423 systemd-networkd[874]: lo: Link UP
Mar 19 11:50:06.958756 systemd-networkd[874]: lo: Gained carrier
Mar 19 11:50:06.960992 systemd-networkd[874]: Enumeration completed
Mar 19 11:50:06.963536 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 11:50:06.969940 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:50:06.969945 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:50:06.970651 systemd[1]: Reached target network.target - Network.
Mar 19 11:50:07.062740 kernel: mlx5_core 8a9a:00:02.0 enP35482s1: Link up
Mar 19 11:50:07.116765 kernel: hv_netvsc 0022487b-78ab-0022-487b-78ab0022487b eth0: Data path switched to VF: enP35482s1
Mar 19 11:50:07.117362 systemd-networkd[874]: enP35482s1: Link UP
Mar 19 11:50:07.117494 systemd-networkd[874]: eth0: Link UP
Mar 19 11:50:07.117688 systemd-networkd[874]: eth0: Gained carrier
Mar 19 11:50:07.117697 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:50:07.149648 systemd-networkd[874]: enP35482s1: Gained carrier
Mar 19 11:50:07.161774 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 19 11:50:07.805102 ignition[825]: Ignition 2.20.0
Mar 19 11:50:07.805113 ignition[825]: Stage: fetch-offline
Mar 19 11:50:07.809847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 11:50:07.805151 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:07.805159 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:07.805251 ignition[825]: parsed url from cmdline: ""
Mar 19 11:50:07.805254 ignition[825]: no config URL provided
Mar 19 11:50:07.805259 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Mar 19 11:50:07.805266 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Mar 19 11:50:07.843938 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 19 11:50:07.805271 ignition[825]: failed to fetch config: resource requires networking
Mar 19 11:50:07.805471 ignition[825]: Ignition finished successfully
Mar 19 11:50:07.873659 ignition[885]: Ignition 2.20.0
Mar 19 11:50:07.873665 ignition[885]: Stage: fetch
Mar 19 11:50:07.873860 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:07.873869 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:07.873962 ignition[885]: parsed url from cmdline: ""
Mar 19 11:50:07.873965 ignition[885]: no config URL provided
Mar 19 11:50:07.873970 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Mar 19 11:50:07.873976 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Mar 19 11:50:07.874001 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 19 11:50:07.982797 ignition[885]: GET result: OK
Mar 19 11:50:07.982816 ignition[885]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Mar 19 11:50:08.007944 ignition[885]: opening config device: "/dev/sr0"
Mar 19 11:50:08.008317 ignition[885]: getting drive status for "/dev/sr0"
Mar 19 11:50:08.008377 ignition[885]: drive status: OK
Mar 19 11:50:08.008422 ignition[885]: mounting config device
Mar 19 11:50:08.008429 ignition[885]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure795127370"
Mar 19 11:50:08.030102 ignition[885]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure795127370"
Mar 19 11:50:08.037659 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2025/03/20 00:00 (1000)
Mar 19 11:50:08.030113 ignition[885]: checking for config drive
Mar 19 11:50:08.037675 ignition[885]: reading config
Mar 19 11:50:08.038081 ignition[885]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure795127370"
Mar 19 11:50:08.038285 ignition[885]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure795127370"
Mar 19 11:50:08.038662 systemd[1]: tmp-ignition\x2dazure795127370.mount: Deactivated successfully.
Mar 19 11:50:08.038301 ignition[885]: config has been read from custom data
Mar 19 11:50:08.043345 unknown[885]: fetched base config from "system"
Mar 19 11:50:08.038340 ignition[885]: parsing config with SHA512: 0493f3e10a883835f79b8e2eac7f4ec0915e25fd286a2265cf93773e1b792aa76734b66d7fda71ee90893c8dc34ba228d99a12f4fd8e3b5f6fb3b5121e37b69d
Mar 19 11:50:08.043352 unknown[885]: fetched base config from "system"
Mar 19 11:50:08.043727 ignition[885]: fetch: fetch complete
Mar 19 11:50:08.043357 unknown[885]: fetched user config from "azure"
Mar 19 11:50:08.043731 ignition[885]: fetch: fetch passed
Mar 19 11:50:08.046602 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 19 11:50:08.043780 ignition[885]: Ignition finished successfully
Mar 19 11:50:08.074886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 19 11:50:08.103838 ignition[893]: Ignition 2.20.0
Mar 19 11:50:08.110222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 19 11:50:08.103844 ignition[893]: Stage: kargs
Mar 19 11:50:08.104011 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:08.137965 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 19 11:50:08.104020 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:08.104976 ignition[893]: kargs: kargs passed
Mar 19 11:50:08.165673 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 19 11:50:08.105021 ignition[893]: Ignition finished successfully
Mar 19 11:50:08.172395 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 19 11:50:08.157374 ignition[900]: Ignition 2.20.0
Mar 19 11:50:08.183373 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 19 11:50:08.157385 ignition[900]: Stage: disks
Mar 19 11:50:08.196285 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:50:08.157571 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:08.206849 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 11:50:08.157580 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:08.219775 systemd[1]: Reached target basic.target - Basic System.
Mar 19 11:50:08.158726 ignition[900]: disks: disks passed
Mar 19 11:50:08.238957 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 19 11:50:08.158778 ignition[900]: Ignition finished successfully
Mar 19 11:50:08.338944 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 19 11:50:08.348224 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 19 11:50:08.365913 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 19 11:50:08.430402 kernel: EXT4-fs (sda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none.
Mar 19 11:50:08.425201 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 19 11:50:08.432252 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:50:08.476799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 11:50:08.484834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 19 11:50:08.518050 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919)
Mar 19 11:50:08.518121 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18
Mar 19 11:50:08.524945 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 19 11:50:08.530650 kernel: BTRFS info (device sda6): using free space tree
Mar 19 11:50:08.525953 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 19 11:50:08.538510 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 19 11:50:08.538568 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 11:50:08.554835 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 19 11:50:08.590792 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 11:50:08.591941 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 19 11:50:08.604534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 11:50:08.781971 systemd-networkd[874]: eth0: Gained IPv6LL
Mar 19 11:50:09.100833 systemd-networkd[874]: enP35482s1: Gained IPv6LL
Mar 19 11:50:09.184588 coreos-metadata[921]: Mar 19 11:50:09.184 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 19 11:50:09.197091 coreos-metadata[921]: Mar 19 11:50:09.196 INFO Fetch successful
Mar 19 11:50:09.197091 coreos-metadata[921]: Mar 19 11:50:09.196 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 19 11:50:09.214305 coreos-metadata[921]: Mar 19 11:50:09.214 INFO Fetch successful
Mar 19 11:50:09.224206 coreos-metadata[921]: Mar 19 11:50:09.224 INFO wrote hostname ci-4230.1.0-a-361b280840 to /sysroot/etc/hostname
Mar 19 11:50:09.234032 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 19 11:50:09.667367 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Mar 19 11:50:09.729847 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Mar 19 11:50:09.736576 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Mar 19 11:50:09.743383 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 19 11:50:10.877226 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 19 11:50:10.895951 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 19 11:50:10.904229 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 19 11:50:10.928429 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 19 11:50:10.937726 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18
Mar 19 11:50:10.954664 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 19 11:50:10.971054 ignition[1039]: INFO : Ignition 2.20.0
Mar 19 11:50:10.971054 ignition[1039]: INFO : Stage: mount
Mar 19 11:50:10.971054 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:10.971054 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:10.998040 ignition[1039]: INFO : mount: mount passed
Mar 19 11:50:10.998040 ignition[1039]: INFO : Ignition finished successfully
Mar 19 11:50:10.976506 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 19 11:50:11.007935 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 19 11:50:11.025998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 19 11:50:11.057599 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Mar 19 11:50:11.057660 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18
Mar 19 11:50:11.069721 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 19 11:50:11.069757 kernel: BTRFS info (device sda6): using free space tree
Mar 19 11:50:11.075728 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 19 11:50:11.077590 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 19 11:50:11.110048 ignition[1067]: INFO : Ignition 2.20.0
Mar 19 11:50:11.110048 ignition[1067]: INFO : Stage: files
Mar 19 11:50:11.117923 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:11.117923 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:11.117923 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Mar 19 11:50:11.137069 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 19 11:50:11.137069 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 19 11:50:11.205478 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 19 11:50:11.212850 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 19 11:50:11.212850 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 19 11:50:11.205947 unknown[1067]: wrote ssh authorized keys file for user: core
Mar 19 11:50:11.249724 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 19 11:50:11.249724 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 19 11:50:11.329536 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 19 11:50:11.556473 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 19 11:50:11.556473 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 19 11:50:11.556473 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 19 11:50:12.026609 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 19 11:50:12.095725 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 19 11:50:12.106319 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 19 11:50:12.487368 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 19 11:50:12.640217 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 19 11:50:12.640217 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 19 11:50:12.687663 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 19 11:50:12.699031 ignition[1067]: INFO : files: files passed
Mar 19 11:50:12.699031 ignition[1067]: INFO : Ignition finished successfully
Mar 19 11:50:12.699302 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 19 11:50:12.739924 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 19 11:50:12.754851 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 19 11:50:12.784295 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 19 11:50:12.784399 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 19 11:50:12.823083 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 11:50:12.823083 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 11:50:12.846951 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 19 11:50:12.829016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 11:50:12.838524 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 19 11:50:12.874935 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 19 11:50:12.900264 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 19 11:50:12.905635 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 19 11:50:12.914073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 19 11:50:12.927227 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 19 11:50:12.938012 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 19 11:50:12.957989 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 19 11:50:12.980779 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 11:50:13.003020 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 19 11:50:13.024540 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 19 11:50:13.026733 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 19 11:50:13.038372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:50:13.051959 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:50:13.065757 systemd[1]: Stopped target timers.target - Timer Units.
Mar 19 11:50:13.077873 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 19 11:50:13.077952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 19 11:50:13.095067 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 19 11:50:13.100925 systemd[1]: Stopped target basic.target - Basic System.
Mar 19 11:50:13.113160 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 19 11:50:13.125185 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 19 11:50:13.136327 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 19 11:50:13.149305 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 19 11:50:13.162012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 11:50:13.176002 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 19 11:50:13.187475 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 19 11:50:13.200699 systemd[1]: Stopped target swap.target - Swaps.
Mar 19 11:50:13.211192 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 19 11:50:13.211284 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 11:50:13.227356 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:50:13.239954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:50:13.252681 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 19 11:50:13.258458 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:50:13.265902 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 19 11:50:13.265980 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 19 11:50:13.283244 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 19 11:50:13.283295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 19 11:50:13.298403 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 19 11:50:13.298448 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 19 11:50:13.309114 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 19 11:50:13.309168 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 19 11:50:13.381897 ignition[1119]: INFO : Ignition 2.20.0
Mar 19 11:50:13.381897 ignition[1119]: INFO : Stage: umount
Mar 19 11:50:13.381897 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 19 11:50:13.381897 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 19 11:50:13.381897 ignition[1119]: INFO : umount: umount passed
Mar 19 11:50:13.381897 ignition[1119]: INFO : Ignition finished successfully
Mar 19 11:50:13.340893 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 19 11:50:13.356438 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 19 11:50:13.356528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:50:13.379832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 19 11:50:13.398417 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 19 11:50:13.398512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:50:13.410240 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 19 11:50:13.410295 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 11:50:13.430218 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 19 11:50:13.430307 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 19 11:50:13.442533 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 19 11:50:13.442591 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 19 11:50:13.453604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 19 11:50:13.453660 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 19 11:50:13.464522 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 19 11:50:13.464570 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 19 11:50:13.475684 systemd[1]: Stopped target network.target - Network.
Mar 19 11:50:13.486849 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 19 11:50:13.486915 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 11:50:13.500520 systemd[1]: Stopped target paths.target - Path Units.
Mar 19 11:50:13.511223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 19 11:50:13.517263 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:50:13.525133 systemd[1]: Stopped target slices.target - Slice Units.
Mar 19 11:50:13.535798 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 19 11:50:13.548364 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 19 11:50:13.548409 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:50:13.559733 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 19 11:50:13.559766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:50:13.571175 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 19 11:50:13.571235 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 19 11:50:13.582247 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 19 11:50:13.582293 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 19 11:50:13.593467 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 19 11:50:13.604072 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 19 11:50:13.623819 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 19 11:50:13.623945 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 19 11:50:13.641557 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 19 11:50:13.641802 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 19 11:50:13.856974 kernel: hv_netvsc 0022487b-78ab-0022-487b-78ab0022487b eth0: Data path switched from VF: enP35482s1
Mar 19 11:50:13.641916 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 19 11:50:13.655759 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 19 11:50:13.656480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 19 11:50:13.656543 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:50:13.688915 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 19 11:50:13.698957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 19 11:50:13.699035 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 11:50:13.711847 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:50:13.711909 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:50:13.728083 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 11:50:13.728134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:50:13.734516 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 11:50:13.734560 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:50:13.753167 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:50:13.765652 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:50:13.765747 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:50:13.803548 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 11:50:13.803703 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:50:13.816918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 11:50:13.816965 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:50:13.828707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 11:50:13.828753 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:50:13.850784 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 11:50:13.850856 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:50:13.869960 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 11:50:13.870026 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:50:13.876259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:50:13.876309 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:50:13.931986 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 11:50:13.951103 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 11:50:13.951214 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:50:13.970123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:50:13.970220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:13.983879 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 19 11:50:13.983989 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 11:50:13.984031 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:50:13.984591 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 19 11:50:13.984746 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 19 11:50:13.994974 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 11:50:13.995063 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 11:50:14.007773 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 11:50:14.007860 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 11:50:14.022656 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 11:50:14.034481 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 11:50:14.034579 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 11:50:14.234008 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 19 11:50:14.069009 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 11:50:14.093489 systemd[1]: Switching root.
Mar 19 11:50:14.243296 systemd-journald[218]: Journal stopped
Mar 19 11:50:19.635009 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 11:50:19.635037 kernel: SELinux: policy capability open_perms=1
Mar 19 11:50:19.635047 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 11:50:19.635055 kernel: SELinux: policy capability always_check_network=0
Mar 19 11:50:19.635065 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 11:50:19.635073 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 11:50:19.635081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 11:50:19.635089 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 11:50:19.635097 kernel: audit: type=1403 audit(1742385015.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 11:50:19.635106 systemd[1]: Successfully loaded SELinux policy in 217.629ms.
Mar 19 11:50:19.635117 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.396ms.
Mar 19 11:50:19.635127 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:50:19.635136 systemd[1]: Detected virtualization microsoft.
Mar 19 11:50:19.635144 systemd[1]: Detected architecture arm64.
Mar 19 11:50:19.635153 systemd[1]: Detected first boot.
Mar 19 11:50:19.635163 systemd[1]: Hostname set to .
Mar 19 11:50:19.635172 systemd[1]: Initializing machine ID from random generator.
Mar 19 11:50:19.635181 zram_generator::config[1161]: No configuration found.
Mar 19 11:50:19.635190 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 11:50:19.635198 systemd[1]: Populated /etc with preset unit settings.
Mar 19 11:50:19.635208 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 11:50:19.635216 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 11:50:19.635227 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 11:50:19.635235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:50:19.635244 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 11:50:19.635254 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 11:50:19.635263 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 11:50:19.635272 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 11:50:19.635280 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 11:50:19.635291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 11:50:19.635302 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 11:50:19.635311 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 11:50:19.635320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:50:19.635329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:50:19.635338 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 11:50:19.635347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 11:50:19.635356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 11:50:19.635366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:50:19.635375 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 19 11:50:19.635384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:50:19.635396 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 11:50:19.635405 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 11:50:19.635414 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:50:19.635423 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 11:50:19.635432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:50:19.635443 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:50:19.635452 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:50:19.635461 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:50:19.635470 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 11:50:19.635479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 11:50:19.635488 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 11:50:19.635500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:50:19.635509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:50:19.635518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:50:19.635527 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 11:50:19.635536 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 11:50:19.635545 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 11:50:19.635555 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 11:50:19.635565 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 11:50:19.635575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 11:50:19.635584 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 11:50:19.635594 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 11:50:19.635603 systemd[1]: Reached target machines.target - Containers.
Mar 19 11:50:19.635612 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 11:50:19.635621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:50:19.635631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:50:19.635642 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 11:50:19.635652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:50:19.635661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:50:19.635670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:50:19.635679 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 11:50:19.635688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:50:19.635699 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 11:50:19.635708 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 11:50:19.637783 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 11:50:19.637796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 11:50:19.637806 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 11:50:19.637817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:50:19.637826 kernel: fuse: init (API version 7.39)
Mar 19 11:50:19.637835 kernel: loop: module loaded
Mar 19 11:50:19.637843 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:50:19.637852 kernel: ACPI: bus type drm_connector registered
Mar 19 11:50:19.637861 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:50:19.637904 systemd-journald[1265]: Collecting audit messages is disabled.
Mar 19 11:50:19.637926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 11:50:19.637936 systemd-journald[1265]: Journal started
Mar 19 11:50:19.637958 systemd-journald[1265]: Runtime Journal (/run/log/journal/624007c9d6804934b253ac38e33d2506) is 8M, max 78.5M, 70.5M free.
Mar 19 11:50:18.545896 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 11:50:18.556602 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 19 11:50:18.556990 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 11:50:18.557325 systemd[1]: systemd-journald.service: Consumed 3.434s CPU time.
Mar 19 11:50:19.668591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 11:50:19.701395 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 11:50:19.720009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:50:19.726758 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 11:50:19.726835 systemd[1]: Stopped verity-setup.service.
Mar 19 11:50:19.751570 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:50:19.752394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 11:50:19.759106 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 11:50:19.766270 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 11:50:19.772541 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 11:50:19.780335 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 11:50:19.788827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 11:50:19.796997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 11:50:19.805722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:50:19.814574 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 11:50:19.814759 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 11:50:19.823269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:50:19.823422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:50:19.831917 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:50:19.832073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:50:19.839680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:50:19.839847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:50:19.849138 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 11:50:19.849299 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 11:50:19.857361 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:50:19.857513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:50:19.865466 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:50:19.873396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 11:50:19.882496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 11:50:19.891527 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 11:50:19.899952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:50:19.917182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 11:50:19.930831 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 11:50:19.941868 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 11:50:19.949128 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 11:50:19.949170 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:50:19.957230 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 11:50:19.966394 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 11:50:19.974882 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 11:50:19.981436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:50:19.997858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 11:50:20.006222 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 11:50:20.014211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:50:20.015927 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 11:50:20.023141 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:50:20.024611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:50:20.032873 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 11:50:20.053557 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 11:50:20.073901 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 11:50:20.086005 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 11:50:20.087032 systemd-journald[1265]: Time spent on flushing to /var/log/journal/624007c9d6804934b253ac38e33d2506 is 12.922ms for 925 entries.
Mar 19 11:50:20.087032 systemd-journald[1265]: System Journal (/var/log/journal/624007c9d6804934b253ac38e33d2506) is 8M, max 2.6G, 2.6G free.
Mar 19 11:50:20.185120 systemd-journald[1265]: Received client request to flush runtime journal.
Mar 19 11:50:20.185178 kernel: loop0: detected capacity change from 0 to 113512
Mar 19 11:50:20.106802 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 11:50:20.119143 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 11:50:20.131579 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 11:50:20.146200 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:50:20.160200 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 11:50:20.175506 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 11:50:20.183839 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 19 11:50:20.187006 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 11:50:20.238215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 11:50:20.238973 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 11:50:20.485807 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 11:50:20.504910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:50:20.577196 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Mar 19 11:50:20.577535 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Mar 19 11:50:20.581911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:50:20.613765 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 11:50:20.680744 kernel: loop1: detected capacity change from 0 to 28720
Mar 19 11:50:21.130669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 11:50:21.143866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:50:21.167963 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Mar 19 11:50:21.230744 kernel: loop2: detected capacity change from 0 to 189592
Mar 19 11:50:21.284738 kernel: loop3: detected capacity change from 0 to 123192
Mar 19 11:50:21.468338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:50:21.485934 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:50:21.541973 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 11:50:21.548277 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 19 11:50:21.588895 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 11:50:21.655757 kernel: hv_vmbus: registering driver hv_balloon
Mar 19 11:50:21.661259 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 19 11:50:21.667487 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 19 11:50:21.676820 kernel: hv_vmbus: registering driver hyperv_fb
Mar 19 11:50:21.677487 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 19 11:50:21.687316 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 19 11:50:21.690478 kernel: Console: switching to colour dummy device 80x25
Mar 19 11:50:21.701115 kernel: Console: switching to colour frame buffer device 128x48
Mar 19 11:50:21.718994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:50:21.737733 kernel: mousedev: PS/2 mouse device common for all mice
Mar 19 11:50:21.747924 systemd-networkd[1342]: lo: Link UP
Mar 19 11:50:21.748238 systemd-networkd[1342]: lo: Gained carrier
Mar 19 11:50:21.750490 systemd-networkd[1342]: Enumeration completed
Mar 19 11:50:21.750690 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 11:50:21.751034 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:50:21.751116 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:50:21.759310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:50:21.759792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:21.767306 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:50:21.774978 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 11:50:21.792831 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 11:50:21.808871 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1348)
Mar 19 11:50:21.809813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:50:21.822792 kernel: mlx5_core 8a9a:00:02.0 enP35482s1: Link up
Mar 19 11:50:21.851730 kernel: hv_netvsc 0022487b-78ab-0022-487b-78ab0022487b eth0: Data path switched to VF: enP35482s1
Mar 19 11:50:21.854078 systemd-networkd[1342]: enP35482s1: Link UP
Mar 19 11:50:21.854169 systemd-networkd[1342]: eth0: Link UP
Mar 19 11:50:21.854172 systemd-networkd[1342]: eth0: Gained carrier
Mar 19 11:50:21.854187 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:50:21.860606 systemd-networkd[1342]: enP35482s1: Gained carrier
Mar 19 11:50:21.863202 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 19 11:50:21.874527 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 19 11:50:21.916730 kernel: loop4: detected capacity change from 0 to 113512
Mar 19 11:50:21.919934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 19 11:50:21.939749 kernel: loop5: detected capacity change from 0 to 28720
Mar 19 11:50:21.940949 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 11:50:21.958984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 11:50:21.963731 kernel: loop6: detected capacity change from 0 to 189592
Mar 19 11:50:21.976861 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 11:50:21.980177 kernel: loop7: detected capacity change from 0 to 123192
Mar 19 11:50:21.986107 (sd-merge)[1443]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 19 11:50:21.986934 (sd-merge)[1443]: Merged extensions into '/usr'.
Mar 19 11:50:21.991188 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 11:50:21.991209 systemd[1]: Reloading...
Mar 19 11:50:22.063759 zram_generator::config[1482]: No configuration found.
Mar 19 11:50:22.078275 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:50:22.223788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:50:22.315771 systemd[1]: Reloading finished in 324 ms.
Mar 19 11:50:22.334671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 11:50:22.342202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:50:22.348924 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 11:50:22.356144 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 19 11:50:22.368397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:50:22.378081 systemd[1]: Starting ensure-sysext.service...
Mar 19 11:50:22.384899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 19 11:50:22.395203 lvm[1542]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:50:22.395900 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:50:22.420817 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 19 11:50:22.427819 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 11:50:22.428375 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 11:50:22.429199 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 11:50:22.429522 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 19 11:50:22.429641 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 19 11:50:22.430574 systemd[1]: Reload requested from client PID 1541 ('systemctl') (unit ensure-sysext.service)...
Mar 19 11:50:22.430592 systemd[1]: Reloading...
Mar 19 11:50:22.461152 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:50:22.461424 systemd-tmpfiles[1543]: Skipping /boot
Mar 19 11:50:22.476356 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:50:22.476480 systemd-tmpfiles[1543]: Skipping /boot
Mar 19 11:50:22.531740 zram_generator::config[1591]: No configuration found.
Mar 19 11:50:22.622587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:50:22.719243 systemd[1]: Reloading finished in 288 ms.
Mar 19 11:50:22.742003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:50:22.762024 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:50:22.787013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 11:50:22.796004 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 11:50:22.806090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:50:22.814606 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 11:50:22.824639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:50:22.832975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:50:22.841463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:50:22.851000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:50:22.861830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:50:22.861957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:50:22.865440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:50:22.867749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:50:22.885821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:50:22.887797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:50:22.898061 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:50:22.898210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:50:22.906329 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 11:50:22.917470 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 11:50:22.933115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:50:22.943689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:50:22.951639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:50:22.960574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:50:22.969080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:50:22.975348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:50:22.975486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:50:22.975635 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 11:50:22.982838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:50:22.983023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:50:22.990472 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:50:22.990635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:50:22.998413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:50:22.998814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:50:23.012369 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:50:23.012608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:50:23.018671 systemd-resolved[1642]: Positive Trust Anchors:
Mar 19 11:50:23.019157 systemd-resolved[1642]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:50:23.019207 systemd-resolved[1642]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:50:23.022257 augenrules[1670]: No rules
Mar 19 11:50:23.024760 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:50:23.025104 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:50:23.033691 systemd[1]: Finished ensure-sysext.service.
Mar 19 11:50:23.048032 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:50:23.048140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:50:23.070344 systemd-resolved[1642]: Using system hostname 'ci-4230.1.0-a-361b280840'.
Mar 19 11:50:23.071925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:50:23.078476 systemd[1]: Reached target network.target - Network.
Mar 19 11:50:23.083469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:50:23.180796 systemd-networkd[1342]: eth0: Gained IPv6LL
Mar 19 11:50:23.183421 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 19 11:50:23.190795 systemd[1]: Reached target network-online.target - Network is Online.
Mar 19 11:50:23.820865 systemd-networkd[1342]: enP35482s1: Gained IPv6LL
Mar 19 11:50:23.995201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 11:50:24.003184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 11:50:26.271805 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 11:50:26.280329 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 11:50:26.291889 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 11:50:26.300519 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 11:50:26.307416 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 11:50:26.313926 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 19 11:50:26.320830 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 19 11:50:26.328274 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 19 11:50:26.334379 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 19 11:50:26.341898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 19 11:50:26.348992 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 19 11:50:26.349034 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:50:26.354072 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:50:26.371818 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 19 11:50:26.380061 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 19 11:50:26.387482 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 19 11:50:26.394922 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 19 11:50:26.402050 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 19 11:50:26.415414 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 19 11:50:26.421581 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 19 11:50:26.428809 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 19 11:50:26.434798 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:50:26.440101 systemd[1]: Reached target basic.target - Basic System.
Mar 19 11:50:26.445337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:50:26.445373 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:50:26.454821 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 19 11:50:26.463880 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 19 11:50:26.476927 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 19 11:50:26.489845 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 19 11:50:26.496399 (chronyd)[1688]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 19 11:50:26.498753 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 19 11:50:26.505902 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 19 11:50:26.508245 jq[1695]: false
Mar 19 11:50:26.515527 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 19 11:50:26.515575 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 19 11:50:26.516959 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 19 11:50:26.522991 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 19 11:50:26.524119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:50:26.534369 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 19 11:50:26.534861 KVP[1697]: KVP starting; pid is:1697
Mar 19 11:50:26.538168 chronyd[1702]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 19 11:50:26.544313 KVP[1697]: KVP LIC Version: 3.1
Mar 19 11:50:26.544463 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 19 11:50:26.544726 kernel: hv_utils: KVP IC version 4.0
Mar 19 11:50:26.551360 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 19 11:50:26.561347 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 19 11:50:26.574913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 19 11:50:26.588989 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 19 11:50:26.598197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 19 11:50:26.598865 chronyd[1702]: Timezone right/UTC failed leap second check, ignoring
Mar 19 11:50:26.599123 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 19 11:50:26.599046 chronyd[1702]: Loaded seccomp filter (level 2)
Mar 19 11:50:26.601994 systemd[1]: Starting update-engine.service - Update Engine...
Mar 19 11:50:26.609862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 19 11:50:26.616790 dbus-daemon[1694]: [system] SELinux support is enabled
Mar 19 11:50:26.620372 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found loop4
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found loop5
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found loop6
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found loop7
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda1
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda2
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda3
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found usr
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda4
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda6
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda7
Mar 19 11:50:26.626870 extend-filesystems[1696]: Found sda9
Mar 19 11:50:26.626870 extend-filesystems[1696]: Checking size of /dev/sda9
Mar 19 11:50:26.638413 systemd[1]: Started chronyd.service - NTP client/server.
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.804 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.810 INFO Fetch successful
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.812 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.818 INFO Fetch successful
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.818 INFO Fetching http://168.63.129.16/machine/71e26e30-f0ff-47b2-8b16-e972c0c6c641/1add9606%2D3256%2D4f6a%2D85c1%2Ddaf30c6a9a8c.%5Fci%2D4230.1.0%2Da%2D361b280840?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.863 INFO Fetch successful
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.863 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 19 11:50:26.963939 coreos-metadata[1690]: Mar 19 11:50:26.883 INFO Fetch successful
Mar 19 11:50:26.992159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1768)
Mar 19 11:50:26.992186 extend-filesystems[1696]: Old size kept for /dev/sda9
Mar 19 11:50:26.992186 extend-filesystems[1696]: Found sr0
Mar 19 11:50:27.031863 update_engine[1718]: I20250319 11:50:26.719152 1718 main.cc:92] Flatcar Update Engine starting
Mar 19 11:50:27.031863 update_engine[1718]: I20250319 11:50:26.733366 1718 update_check_scheduler.cc:74] Next update check in 2m8s
Mar 19 11:50:26.660064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 19 11:50:27.033000 jq[1719]: true
Mar 19 11:50:26.660271 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 19 11:50:26.664297 systemd[1]: motdgen.service: Deactivated successfully.
Mar 19 11:50:26.666087 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 19 11:50:27.035675 tar[1728]: linux-arm64/helm
Mar 19 11:50:26.685460 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 19 11:50:27.036397 jq[1736]: true
Mar 19 11:50:26.685663 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 19 11:50:26.709262 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 19 11:50:26.709462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 19 11:50:27.042587 bash[1776]: Updated "/home/core/.ssh/authorized_keys"
Mar 19 11:50:26.734308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 19 11:50:26.764304 (ntainerd)[1740]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 19 11:50:26.780517 systemd[1]: Started update-engine.service - Update Engine.
Mar 19 11:50:26.784624 systemd-logind[1713]: New seat seat0.
Mar 19 11:50:26.791749 systemd-logind[1713]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 19 11:50:26.795265 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 19 11:50:26.830989 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 19 11:50:26.831172 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 19 11:50:26.870979 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 19 11:50:26.871179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 19 11:50:26.982248 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 19 11:50:27.005666 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 19 11:50:27.030821 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 19 11:50:27.053776 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 19 11:50:27.063499 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 19 11:50:27.443204 containerd[1740]: time="2025-03-19T11:50:27.442989820Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 19 11:50:27.460824 locksmithd[1785]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 19 11:50:27.516652 containerd[1740]: time="2025-03-19T11:50:27.516331780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.527814460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.527856060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.527874340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528032540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..."
type=io.containerd.warning.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528049380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528115740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528128420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528327780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528343580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528355740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:50:27.529735 containerd[1740]: time="2025-03-19T11:50:27.528364580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528446020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528634180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
type=io.containerd.snapshotter.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528783580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528799660Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528892900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 19 11:50:27.530021 containerd[1740]: time="2025-03-19T11:50:27.528933380Z" level=info msg="metadata content store policy set" policy=shared
Mar 19 11:50:27.549374 containerd[1740]: time="2025-03-19T11:50:27.549333780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 19 11:50:27.549564 containerd[1740]: time="2025-03-19T11:50:27.549550060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 19 11:50:27.549777 containerd[1740]: time="2025-03-19T11:50:27.549760060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 19 11:50:27.549857 containerd[1740]: time="2025-03-19T11:50:27.549845180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 19 11:50:27.549933 containerd[1740]: time="2025-03-19T11:50:27.549919900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 19 11:50:27.550186 containerd[1740]: time="2025-03-19T11:50:27.550168620Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..."
type=io.containerd.monitor.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.552831300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.552990900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553022140Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553038500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553051700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553065060Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553076660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553100420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553116700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553131340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..."
type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553142900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553154620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553182500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553557 containerd[1740]: time="2025-03-19T11:50:27.553197060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553216460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553229140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553248860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553262820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553275380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553287740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553300540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553321780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553335180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553346660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553358620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553373020Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553409180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553423140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.553874 containerd[1740]: time="2025-03-19T11:50:27.553434740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 19 11:50:27.554318 containerd[1740]: time="2025-03-19T11:50:27.554289300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 19 11:50:27.558046 containerd[1740]: time="2025-03-19T11:50:27.558008900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 19 11:50:27.558196 containerd[1740]: time="2025-03-19T11:50:27.558178700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 19 11:50:27.558291 containerd[1740]: time="2025-03-19T11:50:27.558275060Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 19 11:50:27.558358 containerd[1740]: time="2025-03-19T11:50:27.558344740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 19 11:50:27.558430 containerd[1740]: time="2025-03-19T11:50:27.558417540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 19 11:50:27.558740 containerd[1740]: time="2025-03-19T11:50:27.558478660Z" level=info msg="NRI interface is disabled by configuration."
Mar 19 11:50:27.558740 containerd[1740]: time="2025-03-19T11:50:27.558512300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1
Mar 19 11:50:27.562373 containerd[1740]: time="2025-03-19T11:50:27.559021500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 19 11:50:27.562373 containerd[1740]: time="2025-03-19T11:50:27.561905940Z" level=info msg="Connect containerd service"
Mar 19 11:50:27.562373 containerd[1740]: time="2025-03-19T11:50:27.561971940Z" level=info msg="using legacy CRI server"
Mar 19 11:50:27.562373 containerd[1740]: time="2025-03-19T11:50:27.561981900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 19 11:50:27.562373 containerd[1740]: time="2025-03-19T11:50:27.562111620Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563214220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563340020Z" level=info msg="Start subscribing containerd event"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563382700Z" level=info msg="Start recovering state"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563459140Z" level=info msg="Start event monitor"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563470740Z" level=info msg="Start
snapshots syncer"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563480460Z" level=info msg="Start cni network conf syncer for default"
Mar 19 11:50:27.563737 containerd[1740]: time="2025-03-19T11:50:27.563487100Z" level=info msg="Start streaming server"
Mar 19 11:50:27.566733 containerd[1740]: time="2025-03-19T11:50:27.565897740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 11:50:27.566733 containerd[1740]: time="2025-03-19T11:50:27.565945220Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 11:50:27.567876 systemd[1]: Started containerd.service - containerd container runtime.
Mar 19 11:50:27.574811 containerd[1740]: time="2025-03-19T11:50:27.574616420Z" level=info msg="containerd successfully booted in 0.137730s"
Mar 19 11:50:27.730549 tar[1728]: linux-arm64/LICENSE
Mar 19 11:50:27.730622 tar[1728]: linux-arm64/README.md
Mar 19 11:50:27.742773 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 19 11:50:27.756132 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:50:27.756373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:50:28.116640 kubelet[1856]: E0319 11:50:28.116532 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:50:28.119056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:50:28.119199 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:50:28.119652 systemd[1]: kubelet.service: Consumed 663ms CPU time, 234.8M memory peak.
Mar 19 11:50:28.310667 sshd_keygen[1717]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 19 11:50:28.328703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 19 11:50:28.341004 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 19 11:50:28.347916 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 19 11:50:28.354219 systemd[1]: issuegen.service: Deactivated successfully.
Mar 19 11:50:28.354413 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 19 11:50:28.369098 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 19 11:50:28.386172 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 19 11:50:28.400824 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 19 11:50:28.413014 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 19 11:50:28.419835 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 19 11:50:28.426689 systemd[1]: Reached target getty.target - Login Prompts.
Mar 19 11:50:28.433826 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 19 11:50:28.440115 systemd[1]: Startup finished in 690ms (kernel) + 13.524s (initrd) + 13.092s (userspace) = 27.306s.
Mar 19 11:50:28.766219 login[1889]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Mar 19 11:50:28.766975 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:50:28.803179 systemd-logind[1713]: New session 1 of user core.
Mar 19 11:50:28.803911 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 19 11:50:28.809948 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 19 11:50:28.821651 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 19 11:50:28.829268 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 19 11:50:28.842443 (systemd)[1896]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 19 11:50:28.844599 systemd-logind[1713]: New session c1 of user core.
Mar 19 11:50:29.041799 systemd[1896]: Queued start job for default target default.target.
Mar 19 11:50:29.049624 systemd[1896]: Created slice app.slice - User Application Slice.
Mar 19 11:50:29.049648 systemd[1896]: Reached target paths.target - Paths.
Mar 19 11:50:29.049691 systemd[1896]: Reached target timers.target - Timers.
Mar 19 11:50:29.050914 systemd[1896]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 19 11:50:29.060176 systemd[1896]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 19 11:50:29.060239 systemd[1896]: Reached target sockets.target - Sockets.
Mar 19 11:50:29.060284 systemd[1896]: Reached target basic.target - Basic System.
Mar 19 11:50:29.060314 systemd[1896]: Reached target default.target - Main User Target.
Mar 19 11:50:29.060338 systemd[1896]: Startup finished in 210ms.
Mar 19 11:50:29.060658 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 19 11:50:29.068981 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 19 11:50:29.767742 login[1889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:50:29.771766 systemd-logind[1713]: New session 2 of user core.
Mar 19 11:50:29.781857 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 19 11:50:30.452840 waagent[1884]: 2025-03-19T11:50:30.452744Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Mar 19 11:50:30.459015 waagent[1884]: 2025-03-19T11:50:30.458943Z INFO Daemon Daemon OS: flatcar 4230.1.0
Mar 19 11:50:30.463603 waagent[1884]: 2025-03-19T11:50:30.463545Z INFO Daemon Daemon Python: 3.11.11
Mar 19 11:50:30.468326 waagent[1884]: 2025-03-19T11:50:30.467981Z INFO Daemon Daemon Run daemon
Mar 19 11:50:30.472164 waagent[1884]: 2025-03-19T11:50:30.472116Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0'
Mar 19 11:50:30.481075 waagent[1884]: 2025-03-19T11:50:30.481010Z INFO Daemon Daemon Using waagent for provisioning
Mar 19 11:50:30.486413 waagent[1884]: 2025-03-19T11:50:30.486365Z INFO Daemon Daemon Activate resource disk
Mar 19 11:50:30.491927 waagent[1884]: 2025-03-19T11:50:30.491873Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 19 11:50:30.504769 waagent[1884]: 2025-03-19T11:50:30.504682Z INFO Daemon Daemon Found device: None
Mar 19 11:50:30.509333 waagent[1884]: 2025-03-19T11:50:30.509279Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 19 11:50:30.517866 waagent[1884]: 2025-03-19T11:50:30.517808Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 19 11:50:30.530073 waagent[1884]: 2025-03-19T11:50:30.530023Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 19 11:50:30.536439 waagent[1884]: 2025-03-19T11:50:30.536387Z INFO Daemon Daemon Running default provisioning handler
Mar 19 11:50:30.548146 waagent[1884]: 2025-03-19T11:50:30.548070Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Mar 19 11:50:30.561962 waagent[1884]: 2025-03-19T11:50:30.561896Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 19 11:50:30.571948 waagent[1884]: 2025-03-19T11:50:30.571887Z INFO Daemon Daemon cloud-init is enabled: False
Mar 19 11:50:30.576953 waagent[1884]: 2025-03-19T11:50:30.576904Z INFO Daemon Daemon Copying ovf-env.xml
Mar 19 11:50:30.594940 waagent[1884]: 2025-03-19T11:50:30.594616Z INFO Daemon Daemon Successfully mounted dvd
Mar 19 11:50:30.609120 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 19 11:50:30.612744 waagent[1884]: 2025-03-19T11:50:30.612648Z INFO Daemon Daemon Detect protocol endpoint
Mar 19 11:50:30.618792 waagent[1884]: 2025-03-19T11:50:30.617930Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 19 11:50:30.623771 waagent[1884]: 2025-03-19T11:50:30.623697Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 19 11:50:30.631053 waagent[1884]: 2025-03-19T11:50:30.630993Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 19 11:50:30.636438 waagent[1884]: 2025-03-19T11:50:30.636387Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 19 11:50:30.641518 waagent[1884]: 2025-03-19T11:50:30.641466Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 19 11:50:30.690538 waagent[1884]: 2025-03-19T11:50:30.690494Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 19 11:50:30.697352 waagent[1884]: 2025-03-19T11:50:30.697320Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 19 11:50:30.702536 waagent[1884]: 2025-03-19T11:50:30.702487Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 19 11:50:30.788137 waagent[1884]: 2025-03-19T11:50:30.787979Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 19 11:50:30.794952 waagent[1884]: 2025-03-19T11:50:30.794877Z INFO Daemon Daemon Forcing an update of the goal state.
Mar 19 11:50:30.804706 waagent[1884]: 2025-03-19T11:50:30.804646Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 19 11:50:30.830192 waagent[1884]: 2025-03-19T11:50:30.830142Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
Mar 19 11:50:30.837026 waagent[1884]: 2025-03-19T11:50:30.836969Z INFO Daemon
Mar 19 11:50:30.839971 waagent[1884]: 2025-03-19T11:50:30.839922Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4d47b7b7-108b-4d70-9c3e-a3776df3a4eb eTag: 11323593016214289982 source: Fabric]
Mar 19 11:50:30.852413 waagent[1884]: 2025-03-19T11:50:30.852357Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Mar 19 11:50:30.859825 waagent[1884]: 2025-03-19T11:50:30.859773Z INFO Daemon
Mar 19 11:50:30.862983 waagent[1884]: 2025-03-19T11:50:30.862934Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 19 11:50:30.875436 waagent[1884]: 2025-03-19T11:50:30.875391Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 19 11:50:31.054850 waagent[1884]: 2025-03-19T11:50:31.054680Z INFO Daemon Downloaded certificate {'thumbprint': 'C2DAB7AA55C463EFB5E2595C01E920A1D3307733', 'hasPrivateKey': True}
Mar 19 11:50:31.067223 waagent[1884]: 2025-03-19T11:50:31.067166Z INFO Daemon Downloaded certificate {'thumbprint': '505341C5D144B07F7F9B922454327DCE99A38649', 'hasPrivateKey': False}
Mar 19 11:50:31.078245 waagent[1884]: 2025-03-19T11:50:31.078189Z INFO Daemon Fetch goal state completed
Mar 19 11:50:31.094695 waagent[1884]: 2025-03-19T11:50:31.094639Z INFO Daemon Daemon Starting provisioning
Mar 19 11:50:31.099823 waagent[1884]: 2025-03-19T11:50:31.099754Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 19 11:50:31.105756 waagent[1884]: 2025-03-19T11:50:31.105683Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-361b280840]
Mar 19 11:50:31.135415 waagent[1884]: 2025-03-19T11:50:31.135332Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-361b280840]
Mar 19 11:50:31.142108 waagent[1884]: 2025-03-19T11:50:31.142031Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 19 11:50:31.148285 waagent[1884]: 2025-03-19T11:50:31.148223Z INFO Daemon Daemon Primary interface is [eth0]
Mar 19 11:50:31.160439 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:50:31.160805 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:50:31.160834 systemd-networkd[1342]: eth0: DHCP lease lost
Mar 19 11:50:31.161566 waagent[1884]: 2025-03-19T11:50:31.161480Z INFO Daemon Daemon Create user account if not exists
Mar 19 11:50:31.167257 waagent[1884]: 2025-03-19T11:50:31.167190Z INFO Daemon Daemon User core already exists, skip useradd
Mar 19 11:50:31.173435 waagent[1884]: 2025-03-19T11:50:31.173353Z INFO Daemon Daemon Configure sudoer
Mar 19 11:50:31.178201 waagent[1884]: 2025-03-19T11:50:31.178134Z INFO Daemon Daemon Configure sshd
Mar 19 11:50:31.183126 waagent[1884]: 2025-03-19T11:50:31.183065Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 19 11:50:31.195392 waagent[1884]: 2025-03-19T11:50:31.195313Z INFO Daemon Daemon Deploy ssh public key.
Mar 19 11:50:31.209790 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 19 11:50:31.294046 waagent[1884]: 2025-03-19T11:50:31.293954Z INFO Daemon Daemon Decode custom data
Mar 19 11:50:31.298579 waagent[1884]: 2025-03-19T11:50:31.298518Z INFO Daemon Daemon Save custom data
Mar 19 11:50:32.403993 waagent[1884]: 2025-03-19T11:50:32.403937Z INFO Daemon Daemon Provisioning complete
Mar 19 11:50:32.419995 waagent[1884]: 2025-03-19T11:50:32.419943Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 19 11:50:32.426825 waagent[1884]: 2025-03-19T11:50:32.426768Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 19 11:50:32.438012 waagent[1884]: 2025-03-19T11:50:32.437958Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 19 11:50:32.568929 waagent[1949]: 2025-03-19T11:50:32.568763Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 19 11:50:32.569234 waagent[1949]: 2025-03-19T11:50:32.568927Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0
Mar 19 11:50:32.569234 waagent[1949]: 2025-03-19T11:50:32.568985Z INFO ExtHandler ExtHandler Python: 3.11.11
Mar 19 11:50:32.674572 waagent[1949]: 2025-03-19T11:50:32.674417Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 19 11:50:32.674749 waagent[1949]: 2025-03-19T11:50:32.674686Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 19 11:50:32.674829 waagent[1949]: 2025-03-19T11:50:32.674793Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 19 11:50:32.686302 waagent[1949]: 2025-03-19T11:50:32.686203Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 19 11:50:32.692945 waagent[1949]: 2025-03-19T11:50:32.692906Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Mar 19 11:50:32.693579 waagent[1949]: 2025-03-19T11:50:32.693532Z INFO ExtHandler
Mar 19 11:50:32.693728 waagent[1949]: 2025-03-19T11:50:32.693659Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9f18290a-7d96-4b5e-82a5-d8060b747772 eTag: 11323593016214289982 source: Fabric]
Mar 19 11:50:32.694264 waagent[1949]: 2025-03-19T11:50:32.694194Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 19 11:50:32.695149 waagent[1949]: 2025-03-19T11:50:32.695062Z INFO ExtHandler
Mar 19 11:50:32.695259 waagent[1949]: 2025-03-19T11:50:32.695228Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 19 11:50:32.699721 waagent[1949]: 2025-03-19T11:50:32.699651Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 19 11:50:32.783697 waagent[1949]: 2025-03-19T11:50:32.783598Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C2DAB7AA55C463EFB5E2595C01E920A1D3307733', 'hasPrivateKey': True}
Mar 19 11:50:32.784134 waagent[1949]: 2025-03-19T11:50:32.784090Z INFO ExtHandler Downloaded certificate {'thumbprint': '505341C5D144B07F7F9B922454327DCE99A38649', 'hasPrivateKey': False}
Mar 19 11:50:32.784536 waagent[1949]: 2025-03-19T11:50:32.784495Z INFO ExtHandler Fetch goal state completed
Mar 19 11:50:32.798459 waagent[1949]: 2025-03-19T11:50:32.798405Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1949
Mar 19 11:50:32.798602 waagent[1949]: 2025-03-19T11:50:32.798560Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 19 11:50:32.800158 waagent[1949]: 2025-03-19T11:50:32.800112Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk']
Mar 19 11:50:32.800532 waagent[1949]: 2025-03-19T11:50:32.800496Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 19 11:50:32.859193 waagent[1949]: 2025-03-19T11:50:32.859147Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 19 11:50:32.859383 waagent[1949]: 2025-03-19T11:50:32.859343Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 19 11:50:32.865215 waagent[1949]: 2025-03-19T11:50:32.864673Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 19 11:50:32.870700 systemd[1]: Reload requested from client PID 1964 ('systemctl') (unit waagent.service)...
Mar 19 11:50:32.870730 systemd[1]: Reloading...
Mar 19 11:50:32.962754 zram_generator::config[2001]: No configuration found.
Mar 19 11:50:33.065416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:50:33.162768 systemd[1]: Reloading finished in 291 ms.
Mar 19 11:50:33.178189 waagent[1949]: 2025-03-19T11:50:33.177819Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Mar 19 11:50:33.184255 systemd[1]: Reload requested from client PID 2057 ('systemctl') (unit waagent.service)...
Mar 19 11:50:33.184268 systemd[1]: Reloading...
Mar 19 11:50:33.282749 zram_generator::config[2108]: No configuration found.
Mar 19 11:50:33.363486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:50:33.461236 systemd[1]: Reloading finished in 276 ms.
Mar 19 11:50:33.476767 waagent[1949]: 2025-03-19T11:50:33.476033Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 19 11:50:33.476767 waagent[1949]: 2025-03-19T11:50:33.476204Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 19 11:50:33.789440 waagent[1949]: 2025-03-19T11:50:33.789313Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 19 11:50:33.790377 waagent[1949]: 2025-03-19T11:50:33.790313Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Mar 19 11:50:33.791230 waagent[1949]: 2025-03-19T11:50:33.791179Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 19 11:50:33.791372 waagent[1949]: 2025-03-19T11:50:33.791321Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 19 11:50:33.791805 waagent[1949]: 2025-03-19T11:50:33.791746Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 19 11:50:33.791998 waagent[1949]: 2025-03-19T11:50:33.791854Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 19 11:50:33.792259 waagent[1949]: 2025-03-19T11:50:33.792200Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 19 11:50:33.792569 waagent[1949]: 2025-03-19T11:50:33.792473Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 19 11:50:33.792688 waagent[1949]: 2025-03-19T11:50:33.792558Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 19 11:50:33.793195 waagent[1949]: 2025-03-19T11:50:33.793135Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 19 11:50:33.793422 waagent[1949]: 2025-03-19T11:50:33.793299Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 19 11:50:33.793422 waagent[1949]: 2025-03-19T11:50:33.793351Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 19 11:50:33.793773 waagent[1949]: 2025-03-19T11:50:33.793687Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 19 11:50:33.794281 waagent[1949]: 2025-03-19T11:50:33.794229Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 19 11:50:33.794495 waagent[1949]: 2025-03-19T11:50:33.794450Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 19 11:50:33.794495 waagent[1949]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 19 11:50:33.794495 waagent[1949]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 19 11:50:33.794495 waagent[1949]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 19 11:50:33.794495 waagent[1949]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 19 11:50:33.794495 waagent[1949]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 19 11:50:33.794495 waagent[1949]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 19 11:50:33.795001 waagent[1949]: 2025-03-19T11:50:33.794934Z INFO EnvHandler ExtHandler Configure routes
Mar 19 11:50:33.795066 waagent[1949]: 2025-03-19T11:50:33.795034Z INFO EnvHandler ExtHandler Gateway:None
Mar 19 11:50:33.795113 waagent[1949]: 2025-03-19T11:50:33.795088Z INFO EnvHandler ExtHandler Routes:None
Mar 19 11:50:33.800443 waagent[1949]: 2025-03-19T11:50:33.800388Z INFO ExtHandler ExtHandler
Mar 19 11:50:33.800933 waagent[1949]: 2025-03-19T11:50:33.800877Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 39b80e55-2417-46b0-b119-dc6ca466239d correlation 0ad86746-6a24-4305-8d87-a65cd761914f created: 2025-03-19T11:49:15.757759Z]
Mar 19 11:50:33.802035 waagent[1949]: 2025-03-19T11:50:33.801981Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 19 11:50:33.803744 waagent[1949]: 2025-03-19T11:50:33.803498Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Mar 19 11:50:33.844344 waagent[1949]: 2025-03-19T11:50:33.843841Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6C684E67-92E5-4C7C-9B49-96848D70AC99;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Mar 19 11:50:33.870812 waagent[1949]: 2025-03-19T11:50:33.870707Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 19 11:50:33.870812 waagent[1949]: Executing ['ip', '-a', '-o', 'link']:
Mar 19 11:50:33.870812 waagent[1949]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 19 11:50:33.870812 waagent[1949]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:78:ab brd ff:ff:ff:ff:ff:ff
Mar 19 11:50:33.870812 waagent[1949]: 3: enP35482s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:78:ab brd ff:ff:ff:ff:ff:ff\ altname enP35482p0s2
Mar 19 11:50:33.870812 waagent[1949]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 19 11:50:33.870812 waagent[1949]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 19 11:50:33.870812 waagent[1949]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 19 11:50:33.870812 waagent[1949]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 19 11:50:33.870812 waagent[1949]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Mar 19 11:50:33.870812 waagent[1949]: 2: eth0 inet6 fe80::222:48ff:fe7b:78ab/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 19 11:50:33.870812 waagent[1949]: 3: enP35482s1 inet6 fe80::222:48ff:fe7b:78ab/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 19 11:50:33.960928 waagent[1949]: 2025-03-19T11:50:33.960849Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Mar 19 11:50:33.960928 waagent[1949]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.960928 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.960928 waagent[1949]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.960928 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.960928 waagent[1949]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.960928 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.960928 waagent[1949]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 19 11:50:33.960928 waagent[1949]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 19 11:50:33.960928 waagent[1949]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 19 11:50:33.963814 waagent[1949]: 2025-03-19T11:50:33.963683Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 19 11:50:33.963814 waagent[1949]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.963814 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.963814 waagent[1949]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.963814 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.963814 waagent[1949]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 19 11:50:33.963814 waagent[1949]: pkts bytes target prot opt in out source destination
Mar 19 11:50:33.963814 waagent[1949]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 19 11:50:33.963814 waagent[1949]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 19 11:50:33.963814 waagent[1949]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 19 11:50:33.964031 waagent[1949]: 2025-03-19T11:50:33.963977Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 19 11:50:38.370031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:50:38.379971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:50:38.476965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:50:38.481116 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:50:38.516051 kubelet[2192]: E0319 11:50:38.515958 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:50:38.519093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:50:38.519361 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:50:38.519938 systemd[1]: kubelet.service: Consumed 118ms CPU time, 94.3M memory peak.
Mar 19 11:50:48.769917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 19 11:50:48.777899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:50:48.867481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:50:48.871215 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:50:48.907724 kubelet[2206]: E0319 11:50:48.907621 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:50:48.910161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:50:48.910405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:50:48.910916 systemd[1]: kubelet.service: Consumed 119ms CPU time, 94.7M memory peak.
Mar 19 11:50:50.398853 chronyd[1702]: Selected source PHC0
Mar 19 11:50:59.017942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 19 11:50:59.028354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:50:59.116902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:50:59.119788 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:50:59.211322 kubelet[2221]: E0319 11:50:59.211257 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:50:59.213805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:50:59.214056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:50:59.214531 systemd[1]: kubelet.service: Consumed 118ms CPU time, 97M memory peak.
Mar 19 11:51:03.794454 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 19 11:51:03.801013 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:55720.service - OpenSSH per-connection server daemon (10.200.16.10:55720).
Mar 19 11:51:04.454056 sshd[2229]: Accepted publickey for core from 10.200.16.10 port 55720 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:04.455301 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:04.459768 systemd-logind[1713]: New session 3 of user core.
Mar 19 11:51:04.466872 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 19 11:51:04.885041 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:55728.service - OpenSSH per-connection server daemon (10.200.16.10:55728).
Mar 19 11:51:05.334992 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 55728 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:05.336239 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:05.341558 systemd-logind[1713]: New session 4 of user core.
Mar 19 11:51:05.345926 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 19 11:51:05.669654 sshd[2236]: Connection closed by 10.200.16.10 port 55728
Mar 19 11:51:05.671819 sshd-session[2234]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:05.674323 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:55728.service: Deactivated successfully.
Mar 19 11:51:05.676062 systemd[1]: session-4.scope: Deactivated successfully.
Mar 19 11:51:05.677523 systemd-logind[1713]: Session 4 logged out. Waiting for processes to exit.
Mar 19 11:51:05.678644 systemd-logind[1713]: Removed session 4.
Mar 19 11:51:05.757962 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:55740.service - OpenSSH per-connection server daemon (10.200.16.10:55740).
Mar 19 11:51:06.203807 sshd[2242]: Accepted publickey for core from 10.200.16.10 port 55740 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:06.205100 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:06.209540 systemd-logind[1713]: New session 5 of user core.
Mar 19 11:51:06.215908 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 19 11:51:06.521811 sshd[2244]: Connection closed by 10.200.16.10 port 55740
Mar 19 11:51:06.522323 sshd-session[2242]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:06.525675 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:55740.service: Deactivated successfully.
Mar 19 11:51:06.527227 systemd[1]: session-5.scope: Deactivated successfully.
Mar 19 11:51:06.527873 systemd-logind[1713]: Session 5 logged out. Waiting for processes to exit.
Mar 19 11:51:06.528880 systemd-logind[1713]: Removed session 5.
Mar 19 11:51:06.612030 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:55752.service - OpenSSH per-connection server daemon (10.200.16.10:55752).
Mar 19 11:51:07.054815 sshd[2250]: Accepted publickey for core from 10.200.16.10 port 55752 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:07.056021 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:07.061244 systemd-logind[1713]: New session 6 of user core.
Mar 19 11:51:07.065840 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 19 11:51:07.376588 sshd[2252]: Connection closed by 10.200.16.10 port 55752
Mar 19 11:51:07.377111 sshd-session[2250]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:07.379929 systemd-logind[1713]: Session 6 logged out. Waiting for processes to exit.
Mar 19 11:51:07.379931 systemd[1]: session-6.scope: Deactivated successfully.
Mar 19 11:51:07.381238 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:55752.service: Deactivated successfully.
Mar 19 11:51:07.383214 systemd-logind[1713]: Removed session 6.
Mar 19 11:51:07.464014 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:55768.service - OpenSSH per-connection server daemon (10.200.16.10:55768).
Mar 19 11:51:07.947520 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 55768 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:07.948823 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:07.952931 systemd-logind[1713]: New session 7 of user core.
Mar 19 11:51:07.963922 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 19 11:51:08.375678 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 19 11:51:08.375984 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:51:08.420685 sudo[2261]: pam_unix(sudo:session): session closed for user root
Mar 19 11:51:08.496682 sshd[2260]: Connection closed by 10.200.16.10 port 55768
Mar 19 11:51:08.497412 sshd-session[2258]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:08.500863 systemd-logind[1713]: Session 7 logged out. Waiting for processes to exit.
Mar 19 11:51:08.502254 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:55768.service: Deactivated successfully.
Mar 19 11:51:08.503925 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 11:51:08.504854 systemd-logind[1713]: Removed session 7.
Mar 19 11:51:08.587983 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:53066.service - OpenSSH per-connection server daemon (10.200.16.10:53066).
Mar 19 11:51:09.032184 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 53066 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:09.033564 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:09.039327 systemd-logind[1713]: New session 8 of user core.
Mar 19 11:51:09.044960 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 19 11:51:09.267854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 19 11:51:09.278432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:09.286951 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 19 11:51:09.287226 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:51:09.291520 sudo[2274]: pam_unix(sudo:session): session closed for user root
Mar 19 11:51:09.296899 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 19 11:51:09.297500 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:51:09.316049 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:51:09.344147 augenrules[2296]: No rules
Mar 19 11:51:09.345852 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:51:09.346055 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:51:09.347343 sudo[2271]: pam_unix(sudo:session): session closed for user root
Mar 19 11:51:09.374909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:09.387080 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:51:09.417582 sshd[2269]: Connection closed by 10.200.16.10 port 53066
Mar 19 11:51:09.419580 sshd-session[2267]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:09.422668 systemd-logind[1713]: Session 8 logged out. Waiting for processes to exit.
Mar 19 11:51:09.423359 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:53066.service: Deactivated successfully.
Mar 19 11:51:09.426705 kubelet[2306]: E0319 11:51:09.425042 2306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:51:09.427351 systemd[1]: session-8.scope: Deactivated successfully.
Mar 19 11:51:09.429136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:51:09.429272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:51:09.429518 systemd[1]: kubelet.service: Consumed 123ms CPU time, 96.6M memory peak.
Mar 19 11:51:09.431705 systemd-logind[1713]: Removed session 8.
Mar 19 11:51:09.513976 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:53068.service - OpenSSH per-connection server daemon (10.200.16.10:53068).
Mar 19 11:51:09.825202 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 19 11:51:10.002799 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 53068 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:51:10.004054 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:51:10.008403 systemd-logind[1713]: New session 9 of user core.
Mar 19 11:51:10.013858 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 19 11:51:10.277374 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 19 11:51:10.278011 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:51:11.618954 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 19 11:51:11.619102 (dockerd)[2336]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 19 11:51:12.136002 update_engine[1718]: I20250319 11:51:12.135922 1718 update_attempter.cc:509] Updating boot flags...
Mar 19 11:51:12.208766 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2356)
Mar 19 11:51:12.347345 dockerd[2336]: time="2025-03-19T11:51:12.347295555Z" level=info msg="Starting up"
Mar 19 11:51:12.369806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2359)
Mar 19 11:51:12.711243 dockerd[2336]: time="2025-03-19T11:51:12.711028471Z" level=info msg="Loading containers: start."
Mar 19 11:51:12.945744 kernel: Initializing XFRM netlink socket
Mar 19 11:51:13.131128 systemd-networkd[1342]: docker0: Link UP
Mar 19 11:51:13.178052 dockerd[2336]: time="2025-03-19T11:51:13.178005139Z" level=info msg="Loading containers: done."
Mar 19 11:51:13.197502 dockerd[2336]: time="2025-03-19T11:51:13.197444095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 19 11:51:13.197661 dockerd[2336]: time="2025-03-19T11:51:13.197568975Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 19 11:51:13.197729 dockerd[2336]: time="2025-03-19T11:51:13.197700775Z" level=info msg="Daemon has completed initialization"
Mar 19 11:51:13.251178 dockerd[2336]: time="2025-03-19T11:51:13.251031394Z" level=info msg="API listen on /run/docker.sock"
Mar 19 11:51:13.251421 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 19 11:51:14.155285 containerd[1740]: time="2025-03-19T11:51:14.155241675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 19 11:51:15.270258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540063650.mount: Deactivated successfully.
Mar 19 11:51:16.859759 containerd[1740]: time="2025-03-19T11:51:16.858756927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:16.862250 containerd[1740]: time="2025-03-19T11:51:16.862023531Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552766"
Mar 19 11:51:16.867731 containerd[1740]: time="2025-03-19T11:51:16.865952535Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:16.872476 containerd[1740]: time="2025-03-19T11:51:16.872430861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:16.873655 containerd[1740]: time="2025-03-19T11:51:16.873618502Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.718333987s"
Mar 19 11:51:16.873655 containerd[1740]: time="2025-03-19T11:51:16.873654423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 19 11:51:16.874577 containerd[1740]: time="2025-03-19T11:51:16.874553463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 19 11:51:18.887580 containerd[1740]: time="2025-03-19T11:51:18.887492662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:18.892783 containerd[1740]: time="2025-03-19T11:51:18.892428427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458978"
Mar 19 11:51:18.899390 containerd[1740]: time="2025-03-19T11:51:18.899359474Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:18.907240 containerd[1740]: time="2025-03-19T11:51:18.907185802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:18.908641 containerd[1740]: time="2025-03-19T11:51:18.908454083Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 2.033783179s"
Mar 19 11:51:18.908641 containerd[1740]: time="2025-03-19T11:51:18.908515563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 19 11:51:18.909488 containerd[1740]: time="2025-03-19T11:51:18.909284964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 19 11:51:19.517703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 19 11:51:19.526896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:19.616572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:19.620048 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:51:19.655680 kubelet[2698]: E0319 11:51:19.655619 2698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:51:19.658053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:51:19.658307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:51:19.658847 systemd[1]: kubelet.service: Consumed 115ms CPU time, 92.4M memory peak.
Mar 19 11:51:20.831896 containerd[1740]: time="2025-03-19T11:51:20.830998430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:20.833385 containerd[1740]: time="2025-03-19T11:51:20.833344635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125829"
Mar 19 11:51:20.844145 containerd[1740]: time="2025-03-19T11:51:20.844095174Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:20.850974 containerd[1740]: time="2025-03-19T11:51:20.850925786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:20.852156 containerd[1740]: time="2025-03-19T11:51:20.852029428Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.942715424s"
Mar 19 11:51:20.852156 containerd[1740]: time="2025-03-19T11:51:20.852064908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 19 11:51:20.852977 containerd[1740]: time="2025-03-19T11:51:20.852764109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 19 11:51:22.357001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617341914.mount: Deactivated successfully.
Mar 19 11:51:22.714815 containerd[1740]: time="2025-03-19T11:51:22.714071777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:22.717433 containerd[1740]: time="2025-03-19T11:51:22.717391062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871915"
Mar 19 11:51:22.720239 containerd[1740]: time="2025-03-19T11:51:22.720195347Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:22.723809 containerd[1740]: time="2025-03-19T11:51:22.723772473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:22.724426 containerd[1740]: time="2025-03-19T11:51:22.724241594Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.871447885s"
Mar 19 11:51:22.724426 containerd[1740]: time="2025-03-19T11:51:22.724278794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 19 11:51:22.725339 containerd[1740]: time="2025-03-19T11:51:22.725132355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 19 11:51:23.509837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777680430.mount: Deactivated successfully.
Mar 19 11:51:24.851264 containerd[1740]: time="2025-03-19T11:51:24.851207910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:24.857734 containerd[1740]: time="2025-03-19T11:51:24.857596881Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Mar 19 11:51:24.942028 containerd[1740]: time="2025-03-19T11:51:24.941969822Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:24.948636 containerd[1740]: time="2025-03-19T11:51:24.948578113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:24.949771 containerd[1740]: time="2025-03-19T11:51:24.949608355Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.22444464s"
Mar 19 11:51:24.949771 containerd[1740]: time="2025-03-19T11:51:24.949642755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 19 11:51:24.950279 containerd[1740]: time="2025-03-19T11:51:24.950235876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 19 11:51:25.626355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160804790.mount: Deactivated successfully.
Mar 19 11:51:25.668791 containerd[1740]: time="2025-03-19T11:51:25.667932396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:25.670836 containerd[1740]: time="2025-03-19T11:51:25.670790281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 19 11:51:25.677852 containerd[1740]: time="2025-03-19T11:51:25.677823812Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:25.685909 containerd[1740]: time="2025-03-19T11:51:25.685841746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:25.686744 containerd[1740]: time="2025-03-19T11:51:25.686595707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 736.315551ms"
Mar 19 11:51:25.686744 containerd[1740]: time="2025-03-19T11:51:25.686628987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 19 11:51:25.687491 containerd[1740]: time="2025-03-19T11:51:25.687303748Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 19 11:51:26.373871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406413558.mount: Deactivated successfully.
Mar 19 11:51:29.767827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 19 11:51:29.775891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:30.715778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:30.731986 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:51:30.765442 kubelet[2826]: E0319 11:51:30.765370 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:51:30.767843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:51:30.768109 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:51:30.769850 systemd[1]: kubelet.service: Consumed 113ms CPU time, 94.3M memory peak.
Mar 19 11:51:31.436748 containerd[1740]: time="2025-03-19T11:51:31.436662971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:31.439396 containerd[1740]: time="2025-03-19T11:51:31.439326415Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Mar 19 11:51:31.448803 containerd[1740]: time="2025-03-19T11:51:31.448721431Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:31.456620 containerd[1740]: time="2025-03-19T11:51:31.456550885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:51:31.458278 containerd[1740]: time="2025-03-19T11:51:31.457931687Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.770578779s"
Mar 19 11:51:31.458278 containerd[1740]: time="2025-03-19T11:51:31.457973927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 19 11:51:37.085873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:37.086440 systemd[1]: kubelet.service: Consumed 113ms CPU time, 94.3M memory peak.
Mar 19 11:51:37.092062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:37.117971 systemd[1]: Reload requested from client PID 2860 ('systemctl') (unit session-9.scope)...
Mar 19 11:51:37.117986 systemd[1]: Reloading...
Mar 19 11:51:37.249812 zram_generator::config[2922]: No configuration found.
Mar 19 11:51:37.328691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:51:37.427474 systemd[1]: Reloading finished in 309 ms.
Mar 19 11:51:37.474926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:37.477295 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:51:37.484254 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:37.485021 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 11:51:37.485305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:37.485344 systemd[1]: kubelet.service: Consumed 90ms CPU time, 86.1M memory peak.
Mar 19 11:51:37.493312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:51:37.588405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:51:37.592685 (kubelet)[2981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:51:37.626258 kubelet[2981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:37.626584 kubelet[2981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:51:37.626629 kubelet[2981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:51:37.626821 kubelet[2981]: I0319 11:51:37.626787 2981 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:51:39.111763 kubelet[2981]: I0319 11:51:39.111169 2981 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 19 11:51:39.111763 kubelet[2981]: I0319 11:51:39.111200 2981 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:51:39.111763 kubelet[2981]: I0319 11:51:39.111426 2981 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 19 11:51:39.130556 kubelet[2981]: E0319 11:51:39.130511 2981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:51:39.131388 kubelet[2981]: I0319 11:51:39.131276 2981 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:51:39.138236 kubelet[2981]: E0319 11:51:39.138198 2981 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:51:39.138236 kubelet[2981]: I0319 11:51:39.138233 2981 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:51:39.141921 kubelet[2981]: I0319 11:51:39.141896 2981 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:51:39.142579 kubelet[2981]: I0319 11:51:39.142559 2981 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:51:39.142746 kubelet[2981]: I0319 11:51:39.142702 2981 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:51:39.142911 kubelet[2981]: I0319 11:51:39.142746 2981 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-361b280840","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:51:39.143000 kubelet[2981]: I0319 11:51:39.142918 2981 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:51:39.143000 kubelet[2981]: I0319 11:51:39.142927 2981 container_manager_linux.go:300] "Creating device plugin manager"
Mar 19 11:51:39.143059 kubelet[2981]: I0319 11:51:39.143037 2981 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:51:39.144511 kubelet[2981]: I0319 11:51:39.144490 2981 kubelet.go:408] "Attempting to sync node with API server"
Mar 19 11:51:39.144537 kubelet[2981]: I0319 11:51:39.144520 2981 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:51:39.144557 kubelet[2981]: I0319 11:51:39.144546 2981 kubelet.go:314] "Adding apiserver pod source"
Mar 19 11:51:39.144557 kubelet[2981]: I0319 11:51:39.144556 2981 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:51:39.149051 kubelet[2981]: W0319 11:51:39.148790 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-361b280840&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Mar 19 11:51:39.149051 kubelet[2981]: E0319 11:51:39.148843 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-361b280840&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:51:39.150699 kubelet[2981]: W0319 11:51:39.150486 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Mar 19 11:51:39.150699 kubelet[2981]: E0319 11:51:39.150537 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:51:39.150930 kubelet[2981]: I0319 11:51:39.150912 2981 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:51:39.152998 kubelet[2981]: I0319 11:51:39.152980 2981 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:51:39.153617 kubelet[2981]: W0319 11:51:39.153602 2981 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 11:51:39.155058 kubelet[2981]: I0319 11:51:39.155034 2981 server.go:1269] "Started kubelet"
Mar 19 11:51:39.157137 kubelet[2981]: I0319 11:51:39.157001 2981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:51:39.161240 kubelet[2981]: E0319 11:51:39.160182 2981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-361b280840.182e320421118347 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-361b280840,UID:ci-4230.1.0-a-361b280840,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-361b280840,},FirstTimestamp:2025-03-19 11:51:39.155014471 +0000 UTC m=+1.559346855,LastTimestamp:2025-03-19 11:51:39.155014471 +0000 UTC m=+1.559346855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-361b280840,}"
Mar 19 11:51:39.164671 kubelet[2981]: I0319 11:51:39.164164 2981 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 11:51:39.164671 kubelet[2981]: I0319 11:51:39.164348 2981 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:51:39.164671 kubelet[2981]: E0319 11:51:39.164355 2981 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-361b280840\" not found"
Mar 19 11:51:39.165003 kubelet[2981]: I0319 11:51:39.164967 2981 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:51:39.165911 kubelet[2981]: I0319 11:51:39.165880 2981 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 11:51:39.166748 kubelet[2981]: I0319 11:51:39.166640 2981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:51:39.166954 kubelet[2981]: I0319 11:51:39.166926 2981 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:51:39.167042 kubelet[2981]: I0319 11:51:39.167030 2981 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 19 11:51:39.167800 kubelet[2981]: I0319 11:51:39.167661 2981 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:51:39.167953 kubelet[2981]: E0319 11:51:39.167907 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-361b280840?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms"
Mar 19 11:51:39.168062 kubelet[2981]: I0319 11:51:39.168039 2981 factory.go:221] Registration of the systemd container factory successfully
Mar 19 11:51:39.168135 kubelet[2981]: I0319 11:51:39.168114 2981 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 11:51:39.168524 kubelet[2981]: E0319 11:51:39.168494 2981 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:51:39.169240 kubelet[2981]: W0319 11:51:39.169113 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Mar 19 11:51:39.169562 kubelet[2981]: E0319 11:51:39.169170 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:51:39.170258 kubelet[2981]: I0319 11:51:39.170215 2981 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:51:39.185669 kubelet[2981]: I0319 11:51:39.185637 2981 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 19 11:51:39.185669 kubelet[2981]: I0319 11:51:39.185656 2981 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 19 11:51:39.185669 kubelet[2981]: I0319 11:51:39.185676 2981 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:51:39.190936 kubelet[2981]: I0319 11:51:39.190499 2981 policy_none.go:49] "None policy: Start"
Mar 19 11:51:39.192142 kubelet[2981]: I0319 11:51:39.191960 2981 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 11:51:39.192142 kubelet[2981]: I0319 11:51:39.192077 2981 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:51:39.193895 kubelet[2981]: I0319 11:51:39.193871 2981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:51:39.194911 kubelet[2981]: I0319 11:51:39.194885 2981 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Mar 19 11:51:39.194911 kubelet[2981]: I0319 11:51:39.194910 2981 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:51:39.196337 kubelet[2981]: I0319 11:51:39.194927 2981 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:51:39.196337 kubelet[2981]: E0319 11:51:39.194962 2981 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:51:39.198543 kubelet[2981]: W0319 11:51:39.198265 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Mar 19 11:51:39.198543 kubelet[2981]: E0319 11:51:39.198304 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:39.202642 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:51:39.211414 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:51:39.214438 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 19 11:51:39.221649 kubelet[2981]: I0319 11:51:39.221476 2981 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:51:39.221765 kubelet[2981]: I0319 11:51:39.221663 2981 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:51:39.221765 kubelet[2981]: I0319 11:51:39.221673 2981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:51:39.222431 kubelet[2981]: I0319 11:51:39.222108 2981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:51:39.223553 kubelet[2981]: E0319 11:51:39.223517 2981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-361b280840\" not found" Mar 19 11:51:39.305583 systemd[1]: Created slice kubepods-burstable-pod58d6137f1d8fb5c8cfe1457f4dfbe92b.slice - libcontainer container kubepods-burstable-pod58d6137f1d8fb5c8cfe1457f4dfbe92b.slice. Mar 19 11:51:39.323810 kubelet[2981]: I0319 11:51:39.323766 2981 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.324535 kubelet[2981]: E0319 11:51:39.324125 2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.327006 systemd[1]: Created slice kubepods-burstable-podc4c20bd207d106ff3a37d45f08bafe9a.slice - libcontainer container kubepods-burstable-podc4c20bd207d106ff3a37d45f08bafe9a.slice. Mar 19 11:51:39.341351 systemd[1]: Created slice kubepods-burstable-pod32adf2f70d4321d4ad1b1b15207b07a4.slice - libcontainer container kubepods-burstable-pod32adf2f70d4321d4ad1b1b15207b07a4.slice. 
Mar 19 11:51:39.369071 kubelet[2981]: E0319 11:51:39.368415 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-361b280840?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Mar 19 11:51:39.369808 kubelet[2981]: I0319 11:51:39.369769 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.369888 kubelet[2981]: I0319 11:51:39.369810 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.369888 kubelet[2981]: I0319 11:51:39.369829 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32adf2f70d4321d4ad1b1b15207b07a4-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-361b280840\" (UID: \"32adf2f70d4321d4ad1b1b15207b07a4\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.369888 kubelet[2981]: I0319 11:51:39.369844 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " 
pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.369888 kubelet[2981]: I0319 11:51:39.369860 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.369888 kubelet[2981]: I0319 11:51:39.369874 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.370019 kubelet[2981]: I0319 11:51:39.369890 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.370019 kubelet[2981]: I0319 11:51:39.369904 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.370019 kubelet[2981]: I0319 11:51:39.369918 2981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840" Mar 19 11:51:39.526502 kubelet[2981]: I0319 11:51:39.526472 2981 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.526845 kubelet[2981]: E0319 11:51:39.526798 2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.626464 containerd[1740]: time="2025-03-19T11:51:39.626086767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-361b280840,Uid:58d6137f1d8fb5c8cfe1457f4dfbe92b,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:39.630425 containerd[1740]: time="2025-03-19T11:51:39.630275335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-361b280840,Uid:c4c20bd207d106ff3a37d45f08bafe9a,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:39.644334 containerd[1740]: time="2025-03-19T11:51:39.644267719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-361b280840,Uid:32adf2f70d4321d4ad1b1b15207b07a4,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:39.769007 kubelet[2981]: E0319 11:51:39.768886 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-361b280840?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Mar 19 11:51:39.934051 kubelet[2981]: I0319 11:51:39.933620 2981 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.934051 kubelet[2981]: E0319 11:51:39.933939 2981 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:39.991929 kubelet[2981]: W0319 11:51:39.991866 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Mar 19 11:51:39.992062 kubelet[2981]: E0319 11:51:39.991938 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:40.224826 kubelet[2981]: W0319 11:51:40.224622 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-361b280840&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Mar 19 11:51:40.224826 kubelet[2981]: E0319 11:51:40.224689 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-361b280840&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:40.261153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155981951.mount: Deactivated successfully. 
Mar 19 11:51:40.270491 kubelet[2981]: W0319 11:51:40.270390 2981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Mar 19 11:51:40.270491 kubelet[2981]: E0319 11:51:40.270458 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:40.297293 containerd[1740]: time="2025-03-19T11:51:40.297242091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:40.319356 containerd[1740]: time="2025-03-19T11:51:40.319283449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 19 11:51:40.322828 containerd[1740]: time="2025-03-19T11:51:40.322796015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:40.327739 containerd[1740]: time="2025-03-19T11:51:40.327642104Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:40.331534 containerd[1740]: time="2025-03-19T11:51:40.330827029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" 
Mar 19 11:51:40.338945 containerd[1740]: time="2025-03-19T11:51:40.338892163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:51:40.342461 containerd[1740]: time="2025-03-19T11:51:40.342392969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:51:40.352910 containerd[1740]: time="2025-03-19T11:51:40.352850667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:40.354154 containerd[1740]: time="2025-03-19T11:51:40.353676869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 727.516061ms" Mar 19 11:51:40.361176 containerd[1740]: time="2025-03-19T11:51:40.361131322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 730.786107ms" Mar 19 11:51:40.368298 containerd[1740]: time="2025-03-19T11:51:40.368230054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 723.883415ms" Mar 19 11:51:40.485438 kubelet[2981]: W0319 11:51:40.485296 2981 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Mar 19 11:51:40.485438 kubelet[2981]: E0319 11:51:40.485370 2981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:40.569527 kubelet[2981]: E0319 11:51:40.569480 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-361b280840?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Mar 19 11:51:40.736046 kubelet[2981]: I0319 11:51:40.735948 2981 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:40.736353 kubelet[2981]: E0319 11:51:40.736272 2981 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:41.168832 kubelet[2981]: E0319 11:51:41.168786 2981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:51:41.641796 containerd[1740]: time="2025-03-19T11:51:41.641554822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:41.641796 containerd[1740]: time="2025-03-19T11:51:41.641616862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:41.641796 containerd[1740]: time="2025-03-19T11:51:41.641631182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.641796 containerd[1740]: time="2025-03-19T11:51:41.641703222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.642642 containerd[1740]: time="2025-03-19T11:51:41.641900022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:41.642642 containerd[1740]: time="2025-03-19T11:51:41.641950782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:41.642642 containerd[1740]: time="2025-03-19T11:51:41.641966382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.643102 containerd[1740]: time="2025-03-19T11:51:41.642027783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.655194 containerd[1740]: time="2025-03-19T11:51:41.654980965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:41.655333 containerd[1740]: time="2025-03-19T11:51:41.655182805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:41.655333 containerd[1740]: time="2025-03-19T11:51:41.655200045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.655459 containerd[1740]: time="2025-03-19T11:51:41.655421086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:41.681170 systemd[1]: Started cri-containerd-8e8ceccaaf30a728a1a17fb4b25208c45d20eb450835d59604c008efe714cfa4.scope - libcontainer container 8e8ceccaaf30a728a1a17fb4b25208c45d20eb450835d59604c008efe714cfa4. Mar 19 11:51:41.685484 systemd[1]: Started cri-containerd-05bc45994360657418c9ed4e1473e1922e11f488c44e8ab7537095b2eea07f71.scope - libcontainer container 05bc45994360657418c9ed4e1473e1922e11f488c44e8ab7537095b2eea07f71. Mar 19 11:51:41.692207 systemd[1]: Started cri-containerd-f13946e7a2ddb11125f7d9bfac87f8ac1dca5b437e16268597c695e4ee429ec9.scope - libcontainer container f13946e7a2ddb11125f7d9bfac87f8ac1dca5b437e16268597c695e4ee429ec9. 
Mar 19 11:51:41.734382 containerd[1740]: time="2025-03-19T11:51:41.734162382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-361b280840,Uid:58d6137f1d8fb5c8cfe1457f4dfbe92b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8ceccaaf30a728a1a17fb4b25208c45d20eb450835d59604c008efe714cfa4\"" Mar 19 11:51:41.741667 containerd[1740]: time="2025-03-19T11:51:41.741543795Z" level=info msg="CreateContainer within sandbox \"8e8ceccaaf30a728a1a17fb4b25208c45d20eb450835d59604c008efe714cfa4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:51:41.744529 containerd[1740]: time="2025-03-19T11:51:41.744484320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-361b280840,Uid:32adf2f70d4321d4ad1b1b15207b07a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"05bc45994360657418c9ed4e1473e1922e11f488c44e8ab7537095b2eea07f71\"" Mar 19 11:51:41.747129 containerd[1740]: time="2025-03-19T11:51:41.747089725Z" level=info msg="CreateContainer within sandbox \"05bc45994360657418c9ed4e1473e1922e11f488c44e8ab7537095b2eea07f71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:51:41.749905 containerd[1740]: time="2025-03-19T11:51:41.749867609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-361b280840,Uid:c4c20bd207d106ff3a37d45f08bafe9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13946e7a2ddb11125f7d9bfac87f8ac1dca5b437e16268597c695e4ee429ec9\"" Mar 19 11:51:41.753415 containerd[1740]: time="2025-03-19T11:51:41.753376616Z" level=info msg="CreateContainer within sandbox \"f13946e7a2ddb11125f7d9bfac87f8ac1dca5b437e16268597c695e4ee429ec9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:51:41.858937 containerd[1740]: time="2025-03-19T11:51:41.858531958Z" level=info msg="CreateContainer within sandbox 
\"8e8ceccaaf30a728a1a17fb4b25208c45d20eb450835d59604c008efe714cfa4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5569b49ffcf363bfe3fb5720dc9643725becfc786666e0bf35be842410751360\"" Mar 19 11:51:41.861954 containerd[1740]: time="2025-03-19T11:51:41.861916404Z" level=info msg="CreateContainer within sandbox \"05bc45994360657418c9ed4e1473e1922e11f488c44e8ab7537095b2eea07f71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83b12bbcccfda1383c775529601a46632d3e8c4e09996f59b348fc47462a7945\"" Mar 19 11:51:41.862273 containerd[1740]: time="2025-03-19T11:51:41.862199684Z" level=info msg="StartContainer for \"5569b49ffcf363bfe3fb5720dc9643725becfc786666e0bf35be842410751360\"" Mar 19 11:51:41.868128 containerd[1740]: time="2025-03-19T11:51:41.868063694Z" level=info msg="CreateContainer within sandbox \"f13946e7a2ddb11125f7d9bfac87f8ac1dca5b437e16268597c695e4ee429ec9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed6ff384f3705733e4737518f246157d8dddb94440c2ee22ead76f16bf00c16c\"" Mar 19 11:51:41.868477 containerd[1740]: time="2025-03-19T11:51:41.868455935Z" level=info msg="StartContainer for \"83b12bbcccfda1383c775529601a46632d3e8c4e09996f59b348fc47462a7945\"" Mar 19 11:51:41.874311 containerd[1740]: time="2025-03-19T11:51:41.873726224Z" level=info msg="StartContainer for \"ed6ff384f3705733e4737518f246157d8dddb94440c2ee22ead76f16bf00c16c\"" Mar 19 11:51:41.893909 systemd[1]: Started cri-containerd-5569b49ffcf363bfe3fb5720dc9643725becfc786666e0bf35be842410751360.scope - libcontainer container 5569b49ffcf363bfe3fb5720dc9643725becfc786666e0bf35be842410751360. Mar 19 11:51:41.907938 systemd[1]: Started cri-containerd-83b12bbcccfda1383c775529601a46632d3e8c4e09996f59b348fc47462a7945.scope - libcontainer container 83b12bbcccfda1383c775529601a46632d3e8c4e09996f59b348fc47462a7945. 
Mar 19 11:51:41.912675 systemd[1]: Started cri-containerd-ed6ff384f3705733e4737518f246157d8dddb94440c2ee22ead76f16bf00c16c.scope - libcontainer container ed6ff384f3705733e4737518f246157d8dddb94440c2ee22ead76f16bf00c16c. Mar 19 11:51:41.926230 kubelet[2981]: E0319 11:51:41.926033 2981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-361b280840.182e320421118347 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-361b280840,UID:ci-4230.1.0-a-361b280840,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-361b280840,},FirstTimestamp:2025-03-19 11:51:39.155014471 +0000 UTC m=+1.559346855,LastTimestamp:2025-03-19 11:51:39.155014471 +0000 UTC m=+1.559346855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-361b280840,}" Mar 19 11:51:41.959076 containerd[1740]: time="2025-03-19T11:51:41.958744612Z" level=info msg="StartContainer for \"5569b49ffcf363bfe3fb5720dc9643725becfc786666e0bf35be842410751360\" returns successfully" Mar 19 11:51:41.965001 containerd[1740]: time="2025-03-19T11:51:41.964930782Z" level=info msg="StartContainer for \"ed6ff384f3705733e4737518f246157d8dddb94440c2ee22ead76f16bf00c16c\" returns successfully" Mar 19 11:51:41.980747 containerd[1740]: time="2025-03-19T11:51:41.980469009Z" level=info msg="StartContainer for \"83b12bbcccfda1383c775529601a46632d3e8c4e09996f59b348fc47462a7945\" returns successfully" Mar 19 11:51:42.339831 kubelet[2981]: I0319 11:51:42.339181 2981 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:44.639089 kubelet[2981]: E0319 
11:51:44.639041 2981 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-a-361b280840\" not found" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:44.752541 kubelet[2981]: I0319 11:51:44.752485 2981 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-361b280840" Mar 19 11:51:45.092379 kubelet[2981]: E0319 11:51:45.092253 2981 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.0-a-361b280840\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.0-a-361b280840" Mar 19 11:51:45.152309 kubelet[2981]: I0319 11:51:45.152173 2981 apiserver.go:52] "Watching apiserver" Mar 19 11:51:45.168050 kubelet[2981]: I0319 11:51:45.167981 2981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:51:47.080360 systemd[1]: Reload requested from client PID 3254 ('systemctl') (unit session-9.scope)... Mar 19 11:51:47.080679 systemd[1]: Reloading... Mar 19 11:51:47.177748 zram_generator::config[3302]: No configuration found. Mar 19 11:51:47.292203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:51:47.403594 systemd[1]: Reloading finished in 322 ms. Mar 19 11:51:47.425251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:47.425586 kubelet[2981]: I0319 11:51:47.425362 2981 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:51:47.447182 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:51:47.447425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:47.447486 systemd[1]: kubelet.service: Consumed 1.898s CPU time, 116.7M memory peak. 
Mar 19 11:51:47.455054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:47.554217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:47.567124 (kubelet)[3365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:51:47.613817 kubelet[3365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:47.613817 kubelet[3365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:51:47.613817 kubelet[3365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:47.614153 kubelet[3365]: I0319 11:51:47.613870 3365 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:51:47.623020 kubelet[3365]: I0319 11:51:47.622970 3365 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:51:47.623020 kubelet[3365]: I0319 11:51:47.623009 3365 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:51:47.623272 kubelet[3365]: I0319 11:51:47.623257 3365 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:51:47.625039 kubelet[3365]: I0319 11:51:47.624780 3365 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 11:51:47.627155 kubelet[3365]: I0319 11:51:47.627015 3365 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:51:47.630247 kubelet[3365]: E0319 11:51:47.630211 3365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:51:47.630247 kubelet[3365]: I0319 11:51:47.630244 3365 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:51:47.633252 kubelet[3365]: I0319 11:51:47.633217 3365 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:51:47.633384 kubelet[3365]: I0319 11:51:47.633340 3365 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:51:47.637755 kubelet[3365]: I0319 11:51:47.633422 3365 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:51:47.637755 kubelet[3365]: I0319 11:51:47.633455 3365 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-361b280840","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:51:47.637755 kubelet[3365]: I0319 11:51:47.633632 3365 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:51:47.637755 kubelet[3365]: I0319 11:51:47.633641 3365 container_manager_linux.go:300] "Creating device plugin manager"
Mar 19 11:51:47.637979 kubelet[3365]: I0319 11:51:47.633670 3365 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:51:47.637979 kubelet[3365]: I0319 11:51:47.633798 3365 kubelet.go:408] "Attempting to sync node with API server"
Mar 19 11:51:47.637979 kubelet[3365]: I0319 11:51:47.633810 3365 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:51:47.637979 kubelet[3365]: I0319 11:51:47.633832 3365 kubelet.go:314] "Adding apiserver pod source"
Mar 19 11:51:47.637979 kubelet[3365]: I0319 11:51:47.633840 3365 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:51:47.643729 kubelet[3365]: I0319 11:51:47.641517 3365 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:51:47.643729 kubelet[3365]: I0319 11:51:47.643127 3365 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:51:47.643729 kubelet[3365]: I0319 11:51:47.643704 3365 server.go:1269] "Started kubelet"
Mar 19 11:51:47.653663 kubelet[3365]: I0319 11:51:47.653625 3365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:51:47.659527 kubelet[3365]: I0319 11:51:47.659423 3365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:51:47.661137 kubelet[3365]: I0319 11:51:47.660512 3365 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 11:51:47.661137 kubelet[3365]: E0319 11:51:47.660633 3365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-361b280840\" not found"
Mar 19 11:51:47.663396 kubelet[3365]: I0319 11:51:47.663272 3365 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 19 11:51:47.665727 kubelet[3365]: I0319 11:51:47.665697 3365 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:51:47.667253 kubelet[3365]: I0319 11:51:47.667182 3365 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:51:47.669137 kubelet[3365]: I0319 11:51:47.668686 3365 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 11:51:47.669684 kubelet[3365]: I0319 11:51:47.669624 3365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:51:47.669861 kubelet[3365]: I0319 11:51:47.669843 3365 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:51:47.685679 kubelet[3365]: I0319 11:51:47.685636 3365 factory.go:221] Registration of the systemd container factory successfully
Mar 19 11:51:47.687548 kubelet[3365]: I0319 11:51:47.687450 3365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 11:51:47.689343 kubelet[3365]: E0319 11:51:47.689227 3365 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:51:47.690479 kubelet[3365]: I0319 11:51:47.690444 3365 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:51:47.694211 kubelet[3365]: I0319 11:51:47.693702 3365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:51:47.695658 kubelet[3365]: I0319 11:51:47.695330 3365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 11:51:47.695658 kubelet[3365]: I0319 11:51:47.695354 3365 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 11:51:47.695658 kubelet[3365]: I0319 11:51:47.695374 3365 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 19 11:51:47.695658 kubelet[3365]: E0319 11:51:47.695417 3365 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:51:47.752034 kubelet[3365]: I0319 11:51:47.752002 3365 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 19 11:51:47.752034 kubelet[3365]: I0319 11:51:47.752025 3365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 19 11:51:47.752034 kubelet[3365]: I0319 11:51:47.752045 3365 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:51:47.752205 kubelet[3365]: I0319 11:51:47.752191 3365 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 11:51:47.752228 kubelet[3365]: I0319 11:51:47.752200 3365 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 11:51:47.752249 kubelet[3365]: I0319 11:51:47.752241 3365 policy_none.go:49] "None policy: Start"
Mar 19 11:51:47.752914 kubelet[3365]: I0319 11:51:47.752892 3365 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 11:51:47.752977 kubelet[3365]: I0319 11:51:47.752921 3365 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:51:47.753105 kubelet[3365]: I0319 11:51:47.753090 3365 state_mem.go:75] "Updated machine memory state"
Mar 19 11:51:47.757888 kubelet[3365]: I0319 11:51:47.757756 3365 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:51:47.758318 kubelet[3365]: I0319 11:51:47.758299 3365 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:51:47.758381 kubelet[3365]: I0319 11:51:47.758317 3365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:51:47.758707 kubelet[3365]: I0319 11:51:47.758623 3365 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:51:47.805690 kubelet[3365]: W0319 11:51:47.805652 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:51:47.810582 kubelet[3365]: W0319 11:51:47.810521 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:51:47.810697 kubelet[3365]: W0319 11:51:47.810635 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:51:47.865213 kubelet[3365]: I0319 11:51:47.864973 3365 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.866870 kubelet[3365]: I0319 11:51:47.866837 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.866934 kubelet[3365]: I0319 11:51:47.866882 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.866934 kubelet[3365]: I0319 11:51:47.866906 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.866934 kubelet[3365]: I0319 11:51:47.866929 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.867066 kubelet[3365]: I0319 11:51:47.866945 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.867066 kubelet[3365]: I0319 11:51:47.866959 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.867066 kubelet[3365]: I0319 11:51:47.866973 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58d6137f1d8fb5c8cfe1457f4dfbe92b-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-361b280840\" (UID: \"58d6137f1d8fb5c8cfe1457f4dfbe92b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.867066 kubelet[3365]: I0319 11:51:47.866987 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4c20bd207d106ff3a37d45f08bafe9a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-361b280840\" (UID: \"c4c20bd207d106ff3a37d45f08bafe9a\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.867066 kubelet[3365]: I0319 11:51:47.867003 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32adf2f70d4321d4ad1b1b15207b07a4-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-361b280840\" (UID: \"32adf2f70d4321d4ad1b1b15207b07a4\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.880349 kubelet[3365]: I0319 11:51:47.880314 3365 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.0-a-361b280840"
Mar 19 11:51:47.880482 kubelet[3365]: I0319 11:51:47.880407 3365 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-361b280840"
Mar 19 11:51:48.101558 sudo[3398]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 19 11:51:48.101913 sudo[3398]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 19 11:51:48.565316 sudo[3398]: pam_unix(sudo:session): session closed for user root
Mar 19 11:51:48.634758 kubelet[3365]: I0319 11:51:48.634493 3365 apiserver.go:52] "Watching apiserver"
Mar 19 11:51:48.663987 kubelet[3365]: I0319 11:51:48.663932 3365 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 19 11:51:48.747264 kubelet[3365]: W0319 11:51:48.747227 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:51:48.747403 kubelet[3365]: E0319 11:51:48.747297 3365 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-a-361b280840\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840"
Mar 19 11:51:48.793083 kubelet[3365]: I0319 11:51:48.793017 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-a-361b280840" podStartSLOduration=1.792998464 podStartE2EDuration="1.792998464s" podCreationTimestamp="2025-03-19 11:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:48.770833669 +0000 UTC m=+1.200813696" watchObservedRunningTime="2025-03-19 11:51:48.792998464 +0000 UTC m=+1.222978491"
Mar 19 11:51:48.814723 kubelet[3365]: I0319 11:51:48.813092 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-a-361b280840" podStartSLOduration=1.813064855 podStartE2EDuration="1.813064855s" podCreationTimestamp="2025-03-19 11:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:48.793825825 +0000 UTC m=+1.223805932" watchObservedRunningTime="2025-03-19 11:51:48.813064855 +0000 UTC m=+1.243044882"
Mar 19 11:51:48.814723 kubelet[3365]: I0319 11:51:48.813232 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-361b280840" podStartSLOduration=1.8132273749999999 podStartE2EDuration="1.813227375s" podCreationTimestamp="2025-03-19 11:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:48.812908974 +0000 UTC m=+1.242889001" watchObservedRunningTime="2025-03-19 11:51:48.813227375 +0000 UTC m=+1.243207402"
Mar 19 11:51:50.663595 sudo[2320]: pam_unix(sudo:session): session closed for user root
Mar 19 11:51:50.741690 sshd[2319]: Connection closed by 10.200.16.10 port 53068
Mar 19 11:51:50.742291 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Mar 19 11:51:50.745662 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:53068.service: Deactivated successfully.
Mar 19 11:51:50.747487 systemd[1]: session-9.scope: Deactivated successfully.
Mar 19 11:51:50.747666 systemd[1]: session-9.scope: Consumed 7.570s CPU time, 256.4M memory peak.
Mar 19 11:51:50.750017 systemd-logind[1713]: Session 9 logged out. Waiting for processes to exit.
Mar 19 11:51:50.750902 systemd-logind[1713]: Removed session 9.
Mar 19 11:51:52.541124 kubelet[3365]: I0319 11:51:52.541085 3365 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 19 11:51:52.541840 containerd[1740]: time="2025-03-19T11:51:52.541745976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 11:51:52.542090 kubelet[3365]: I0319 11:51:52.541961 3365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 19 11:51:53.417283 kubelet[3365]: W0319 11:51:53.417232 3365 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.1.0-a-361b280840" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.0-a-361b280840' and this object
Mar 19 11:51:53.417283 kubelet[3365]: E0319 11:51:53.417278 3365 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230.1.0-a-361b280840\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.1.0-a-361b280840' and this object" logger="UnhandledError"
Mar 19 11:51:53.424591 systemd[1]: Created slice kubepods-besteffort-pod8d2863a8_16d1_467a_86b5_db0a3f0495d1.slice - libcontainer container kubepods-besteffort-pod8d2863a8_16d1_467a_86b5_db0a3f0495d1.slice.
Mar 19 11:51:53.438666 systemd[1]: Created slice kubepods-burstable-pode6f21c8a_30e6_4ac4_b3da_9d1ff91413f3.slice - libcontainer container kubepods-burstable-pode6f21c8a_30e6_4ac4_b3da_9d1ff91413f3.slice.
Mar 19 11:51:53.501793 kubelet[3365]: I0319 11:51:53.501755 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-etc-cni-netd\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.501999 kubelet[3365]: I0319 11:51:53.501986 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-config-path\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502105 kubelet[3365]: I0319 11:51:53.502093 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d2863a8-16d1-467a-86b5-db0a3f0495d1-lib-modules\") pod \"kube-proxy-wkxr6\" (UID: \"8d2863a8-16d1-467a-86b5-db0a3f0495d1\") " pod="kube-system/kube-proxy-wkxr6"
Mar 19 11:51:53.502251 kubelet[3365]: I0319 11:51:53.502194 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-bpf-maps\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502251 kubelet[3365]: I0319 11:51:53.502214 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hubble-tls\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502405 kubelet[3365]: I0319 11:51:53.502238 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58h8\" (UniqueName: \"kubernetes.io/projected/8d2863a8-16d1-467a-86b5-db0a3f0495d1-kube-api-access-k58h8\") pod \"kube-proxy-wkxr6\" (UID: \"8d2863a8-16d1-467a-86b5-db0a3f0495d1\") " pod="kube-system/kube-proxy-wkxr6"
Mar 19 11:51:53.502405 kubelet[3365]: I0319 11:51:53.502374 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-xtables-lock\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502392 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d2863a8-16d1-467a-86b5-db0a3f0495d1-xtables-lock\") pod \"kube-proxy-wkxr6\" (UID: \"8d2863a8-16d1-467a-86b5-db0a3f0495d1\") " pod="kube-system/kube-proxy-wkxr6"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502493 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-cgroup\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502509 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-lib-modules\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502526 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-run\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502548 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-clustermesh-secrets\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502749 kubelet[3365]: I0319 11:51:53.502576 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d2863a8-16d1-467a-86b5-db0a3f0495d1-kube-proxy\") pod \"kube-proxy-wkxr6\" (UID: \"8d2863a8-16d1-467a-86b5-db0a3f0495d1\") " pod="kube-system/kube-proxy-wkxr6"
Mar 19 11:51:53.502905 kubelet[3365]: I0319 11:51:53.502591 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-net\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502905 kubelet[3365]: I0319 11:51:53.502605 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-kernel\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502905 kubelet[3365]: I0319 11:51:53.502619 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj44r\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-kube-api-access-gj44r\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502905 kubelet[3365]: I0319 11:51:53.502635 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hostproc\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.502905 kubelet[3365]: I0319 11:51:53.502649 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cni-path\") pod \"cilium-b68xg\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " pod="kube-system/cilium-b68xg"
Mar 19 11:51:53.573238 systemd[1]: Created slice kubepods-besteffort-pod5a9412ec_7486_4091_9abe_7f4b62dcaabc.slice - libcontainer container kubepods-besteffort-pod5a9412ec_7486_4091_9abe_7f4b62dcaabc.slice.
Mar 19 11:51:53.603736 kubelet[3365]: I0319 11:51:53.603126 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a9412ec-7486-4091-9abe-7f4b62dcaabc-cilium-config-path\") pod \"cilium-operator-5d85765b45-8sbtt\" (UID: \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\") " pod="kube-system/cilium-operator-5d85765b45-8sbtt"
Mar 19 11:51:53.603736 kubelet[3365]: I0319 11:51:53.603239 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwnx\" (UniqueName: \"kubernetes.io/projected/5a9412ec-7486-4091-9abe-7f4b62dcaabc-kube-api-access-ddwnx\") pod \"cilium-operator-5d85765b45-8sbtt\" (UID: \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\") " pod="kube-system/cilium-operator-5d85765b45-8sbtt"
Mar 19 11:51:54.636094 containerd[1740]: time="2025-03-19T11:51:54.636048182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkxr6,Uid:8d2863a8-16d1-467a-86b5-db0a3f0495d1,Namespace:kube-system,Attempt:0,}"
Mar 19 11:51:54.645006 containerd[1740]: time="2025-03-19T11:51:54.644771876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68xg,Uid:e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3,Namespace:kube-system,Attempt:0,}"
Mar 19 11:51:54.699854 containerd[1740]: time="2025-03-19T11:51:54.699641285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:51:54.699854 containerd[1740]: time="2025-03-19T11:51:54.699694765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:51:54.699854 containerd[1740]: time="2025-03-19T11:51:54.699706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.699854 containerd[1740]: time="2025-03-19T11:51:54.699805365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.724521 containerd[1740]: time="2025-03-19T11:51:54.722511162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:51:54.724521 containerd[1740]: time="2025-03-19T11:51:54.722564562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:51:54.724521 containerd[1740]: time="2025-03-19T11:51:54.722579082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.724521 containerd[1740]: time="2025-03-19T11:51:54.722649802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.727156 systemd[1]: run-containerd-runc-k8s.io-98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26-runc.LT2I1Q.mount: Deactivated successfully.
Mar 19 11:51:54.737035 systemd[1]: Started cri-containerd-98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26.scope - libcontainer container 98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26.
Mar 19 11:51:54.751913 systemd[1]: Started cri-containerd-e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e.scope - libcontainer container e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e.
Mar 19 11:51:54.780606 containerd[1740]: time="2025-03-19T11:51:54.780567976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8sbtt,Uid:5a9412ec-7486-4091-9abe-7f4b62dcaabc,Namespace:kube-system,Attempt:0,}"
Mar 19 11:51:54.783317 containerd[1740]: time="2025-03-19T11:51:54.783270340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkxr6,Uid:8d2863a8-16d1-467a-86b5-db0a3f0495d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26\""
Mar 19 11:51:54.788118 containerd[1740]: time="2025-03-19T11:51:54.788079108Z" level=info msg="CreateContainer within sandbox \"98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 11:51:54.792394 containerd[1740]: time="2025-03-19T11:51:54.792306355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68xg,Uid:e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\""
Mar 19 11:51:54.795003 containerd[1740]: time="2025-03-19T11:51:54.794968879Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 19 11:51:54.871386 containerd[1740]: time="2025-03-19T11:51:54.871177842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:51:54.871386 containerd[1740]: time="2025-03-19T11:51:54.871249162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:51:54.871386 containerd[1740]: time="2025-03-19T11:51:54.871264482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.871386 containerd[1740]: time="2025-03-19T11:51:54.871336842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:51:54.880193 containerd[1740]: time="2025-03-19T11:51:54.880057417Z" level=info msg="CreateContainer within sandbox \"98d1bc2523441e3706fc6ed06dc815ffd228723c4d61d11e1c4e6a348779ed26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4574a089495ef7cde04270a4b1eece7d58914f07574506ab23adaff8a3ad6c4d\""
Mar 19 11:51:54.882581 containerd[1740]: time="2025-03-19T11:51:54.882332100Z" level=info msg="StartContainer for \"4574a089495ef7cde04270a4b1eece7d58914f07574506ab23adaff8a3ad6c4d\""
Mar 19 11:51:54.893088 systemd[1]: Started cri-containerd-cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e.scope - libcontainer container cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e.
Mar 19 11:51:54.918034 systemd[1]: Started cri-containerd-4574a089495ef7cde04270a4b1eece7d58914f07574506ab23adaff8a3ad6c4d.scope - libcontainer container 4574a089495ef7cde04270a4b1eece7d58914f07574506ab23adaff8a3ad6c4d.
Mar 19 11:51:54.941381 containerd[1740]: time="2025-03-19T11:51:54.941301356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8sbtt,Uid:5a9412ec-7486-4091-9abe-7f4b62dcaabc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\""
Mar 19 11:51:54.960972 containerd[1740]: time="2025-03-19T11:51:54.960931707Z" level=info msg="StartContainer for \"4574a089495ef7cde04270a4b1eece7d58914f07574506ab23adaff8a3ad6c4d\" returns successfully"
Mar 19 11:51:57.245357 kubelet[3365]: I0319 11:51:57.245276 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wkxr6" podStartSLOduration=4.245236158 podStartE2EDuration="4.245236158s" podCreationTimestamp="2025-03-19 11:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:55.760290639 +0000 UTC m=+8.190270666" watchObservedRunningTime="2025-03-19 11:51:57.245236158 +0000 UTC m=+9.675216185"
Mar 19 11:51:59.358483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682538143.mount: Deactivated successfully.
Mar 19 11:52:00.991631 containerd[1740]: time="2025-03-19T11:52:00.990762970Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:52:00.994459 containerd[1740]: time="2025-03-19T11:52:00.994405456Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 19 11:52:00.998518 containerd[1740]: time="2025-03-19T11:52:00.998449583Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:52:01.000213 containerd[1740]: time="2025-03-19T11:52:01.000094065Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.205089106s"
Mar 19 11:52:01.000213 containerd[1740]: time="2025-03-19T11:52:01.000127785Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 19 11:52:01.003162 containerd[1740]: time="2025-03-19T11:52:01.002974790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 19 11:52:01.004129 containerd[1740]: time="2025-03-19T11:52:01.003990392Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:52:01.050071 containerd[1740]: time="2025-03-19T11:52:01.049976906Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\""
Mar 19 11:52:01.050581 containerd[1740]: time="2025-03-19T11:52:01.050548547Z" level=info msg="StartContainer for \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\""
Mar 19 11:52:01.079946 systemd[1]: Started cri-containerd-2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7.scope - libcontainer container 2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7.
Mar 19 11:52:01.109876 containerd[1740]: time="2025-03-19T11:52:01.109829163Z" level=info msg="StartContainer for \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\" returns successfully"
Mar 19 11:52:01.116202 systemd[1]: cri-containerd-2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7.scope: Deactivated successfully.
Mar 19 11:52:02.036182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7-rootfs.mount: Deactivated successfully.
Mar 19 11:52:02.838268 containerd[1740]: time="2025-03-19T11:52:02.838146085Z" level=info msg="shim disconnected" id=2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7 namespace=k8s.io
Mar 19 11:52:02.838268 containerd[1740]: time="2025-03-19T11:52:02.838196045Z" level=warning msg="cleaning up after shim disconnected" id=2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7 namespace=k8s.io
Mar 19 11:52:02.838268 containerd[1740]: time="2025-03-19T11:52:02.838203765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:52:03.763302 containerd[1740]: time="2025-03-19T11:52:03.763208606Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:52:03.816962 containerd[1740]: time="2025-03-19T11:52:03.816828214Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\""
Mar 19 11:52:03.818289 containerd[1740]: time="2025-03-19T11:52:03.818045616Z" level=info msg="StartContainer for \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\""
Mar 19 11:52:03.845906 systemd[1]: Started cri-containerd-04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936.scope - libcontainer container 04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936.
Mar 19 11:52:03.873691 containerd[1740]: time="2025-03-19T11:52:03.872664226Z" level=info msg="StartContainer for \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\" returns successfully"
Mar 19 11:52:03.881350 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:52:03.881569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:52:03.881748 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:52:03.886955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:52:03.889306 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:52:03.890436 systemd[1]: cri-containerd-04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936.scope: Deactivated successfully.
Mar 19 11:52:03.907113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:52:03.934387 containerd[1740]: time="2025-03-19T11:52:03.934195887Z" level=info msg="shim disconnected" id=04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936 namespace=k8s.io
Mar 19 11:52:03.934387 containerd[1740]: time="2025-03-19T11:52:03.934245848Z" level=warning msg="cleaning up after shim disconnected" id=04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936 namespace=k8s.io
Mar 19 11:52:03.934387 containerd[1740]: time="2025-03-19T11:52:03.934255008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:52:04.768220 containerd[1740]: time="2025-03-19T11:52:04.768122339Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:52:04.803645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936-rootfs.mount: Deactivated successfully.
Mar 19 11:52:04.845506 containerd[1740]: time="2025-03-19T11:52:04.845452546Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\""
Mar 19 11:52:04.846744 containerd[1740]: time="2025-03-19T11:52:04.846632748Z" level=info msg="StartContainer for \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\""
Mar 19 11:52:04.861879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074345551.mount: Deactivated successfully.
Mar 19 11:52:04.881929 systemd[1]: Started cri-containerd-ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1.scope - libcontainer container ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1.
Mar 19 11:52:04.915292 systemd[1]: cri-containerd-ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1.scope: Deactivated successfully.
Mar 19 11:52:04.921096 containerd[1740]: time="2025-03-19T11:52:04.921039950Z" level=info msg="StartContainer for \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\" returns successfully"
Mar 19 11:52:04.958054 containerd[1740]: time="2025-03-19T11:52:04.957921691Z" level=info msg="shim disconnected" id=ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1 namespace=k8s.io
Mar 19 11:52:04.958054 containerd[1740]: time="2025-03-19T11:52:04.957993811Z" level=warning msg="cleaning up after shim disconnected" id=ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1 namespace=k8s.io
Mar 19 11:52:04.958054 containerd[1740]: time="2025-03-19T11:52:04.958003211Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:52:05.417498 containerd[1740]: time="2025-03-19T11:52:05.417439967Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:52:05.420746 containerd[1740]: time="2025-03-19T11:52:05.420580812Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 19 11:52:05.426258 containerd[1740]: time="2025-03-19T11:52:05.426202901Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:52:05.427749 containerd[1740]: time="2025-03-19T11:52:05.427572583Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.424563353s"
Mar 19 11:52:05.427749 containerd[1740]: time="2025-03-19T11:52:05.427604623Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 19 11:52:05.430978 containerd[1740]: time="2025-03-19T11:52:05.430751149Z" level=info msg="CreateContainer within sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 19 11:52:05.472700 containerd[1740]: time="2025-03-19T11:52:05.472649378Z" level=info msg="CreateContainer within sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\""
Mar 19 11:52:05.475449 containerd[1740]: time="2025-03-19T11:52:05.474142860Z" level=info msg="StartContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\""
Mar 19 11:52:05.499946 systemd[1]: Started cri-containerd-be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93.scope - libcontainer container be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93.
Mar 19 11:52:05.526746 containerd[1740]: time="2025-03-19T11:52:05.526494226Z" level=info msg="StartContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" returns successfully"
Mar 19 11:52:05.777033 containerd[1740]: time="2025-03-19T11:52:05.776659917Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:52:05.826209 kubelet[3365]: I0319 11:52:05.826146 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8sbtt" podStartSLOduration=2.340568332 podStartE2EDuration="12.826130359s" podCreationTimestamp="2025-03-19 11:51:53 +0000 UTC" firstStartedPulling="2025-03-19 11:51:54.942893638 +0000 UTC m=+7.372873665" lastFinishedPulling="2025-03-19 11:52:05.428455665 +0000 UTC m=+17.858435692" observedRunningTime="2025-03-19 11:52:05.791815382 +0000 UTC m=+18.221795489" watchObservedRunningTime="2025-03-19 11:52:05.826130359 +0000 UTC m=+18.256110346"
Mar 19 11:52:05.836774 containerd[1740]: time="2025-03-19T11:52:05.836080935Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\""
Mar 19 11:52:05.838767 containerd[1740]: time="2025-03-19T11:52:05.837901938Z" level=info msg="StartContainer for \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\""
Mar 19 11:52:05.884837 systemd[1]: Started cri-containerd-8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201.scope - libcontainer container 8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201.
Mar 19 11:52:05.943402 containerd[1740]: time="2025-03-19T11:52:05.943350632Z" level=info msg="StartContainer for \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\" returns successfully"
Mar 19 11:52:05.945917 systemd[1]: cri-containerd-8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201.scope: Deactivated successfully.
Mar 19 11:52:05.977345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201-rootfs.mount: Deactivated successfully.
Mar 19 11:52:06.248415 containerd[1740]: time="2025-03-19T11:52:06.247895932Z" level=info msg="shim disconnected" id=8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201 namespace=k8s.io
Mar 19 11:52:06.248415 containerd[1740]: time="2025-03-19T11:52:06.247973413Z" level=warning msg="cleaning up after shim disconnected" id=8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201 namespace=k8s.io
Mar 19 11:52:06.248415 containerd[1740]: time="2025-03-19T11:52:06.247984653Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:52:06.783379 containerd[1740]: time="2025-03-19T11:52:06.783317573Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:52:06.884662 containerd[1740]: time="2025-03-19T11:52:06.884586860Z" level=info msg="CreateContainer within sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\""
Mar 19 11:52:06.885305 containerd[1740]: time="2025-03-19T11:52:06.885254901Z" level=info msg="StartContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\""
Mar 19 11:52:06.911191 systemd[1]: run-containerd-runc-k8s.io-fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2-runc.tWuiDo.mount: Deactivated successfully.
Mar 19 11:52:06.918900 systemd[1]: Started cri-containerd-fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2.scope - libcontainer container fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2.
Mar 19 11:52:06.963081 containerd[1740]: time="2025-03-19T11:52:06.962959308Z" level=info msg="StartContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" returns successfully"
Mar 19 11:52:07.138511 kubelet[3365]: I0319 11:52:07.138411 3365 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 19 11:52:07.180686 systemd[1]: Created slice kubepods-burstable-podadbe205a_f9b3_415c_9aed_6d476ba21ec5.slice - libcontainer container kubepods-burstable-podadbe205a_f9b3_415c_9aed_6d476ba21ec5.slice.
Mar 19 11:52:07.197108 kubelet[3365]: I0319 11:52:07.197066 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7wz7\" (UniqueName: \"kubernetes.io/projected/adbe205a-f9b3-415c-9aed-6d476ba21ec5-kube-api-access-k7wz7\") pod \"coredns-6f6b679f8f-8njlm\" (UID: \"adbe205a-f9b3-415c-9aed-6d476ba21ec5\") " pod="kube-system/coredns-6f6b679f8f-8njlm"
Mar 19 11:52:07.197238 kubelet[3365]: I0319 11:52:07.197111 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adbe205a-f9b3-415c-9aed-6d476ba21ec5-config-volume\") pod \"coredns-6f6b679f8f-8njlm\" (UID: \"adbe205a-f9b3-415c-9aed-6d476ba21ec5\") " pod="kube-system/coredns-6f6b679f8f-8njlm"
Mar 19 11:52:07.198511 systemd[1]: Created slice kubepods-burstable-pod25628f11_6ff7_40e4_8074_f826a093cb37.slice - libcontainer container kubepods-burstable-pod25628f11_6ff7_40e4_8074_f826a093cb37.slice.
Mar 19 11:52:07.298013 kubelet[3365]: I0319 11:52:07.297971 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25628f11-6ff7-40e4-8074-f826a093cb37-config-volume\") pod \"coredns-6f6b679f8f-wrqsc\" (UID: \"25628f11-6ff7-40e4-8074-f826a093cb37\") " pod="kube-system/coredns-6f6b679f8f-wrqsc"
Mar 19 11:52:07.298159 kubelet[3365]: I0319 11:52:07.298039 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtp2\" (UniqueName: \"kubernetes.io/projected/25628f11-6ff7-40e4-8074-f826a093cb37-kube-api-access-lhtp2\") pod \"coredns-6f6b679f8f-wrqsc\" (UID: \"25628f11-6ff7-40e4-8074-f826a093cb37\") " pod="kube-system/coredns-6f6b679f8f-wrqsc"
Mar 19 11:52:07.491928 containerd[1740]: time="2025-03-19T11:52:07.491178857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8njlm,Uid:adbe205a-f9b3-415c-9aed-6d476ba21ec5,Namespace:kube-system,Attempt:0,}"
Mar 19 11:52:07.503535 containerd[1740]: time="2025-03-19T11:52:07.503244237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrqsc,Uid:25628f11-6ff7-40e4-8074-f826a093cb37,Namespace:kube-system,Attempt:0,}"
Mar 19 11:52:07.811201 kubelet[3365]: I0319 11:52:07.809653 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b68xg" podStartSLOduration=8.601864071 podStartE2EDuration="14.809635381s" podCreationTimestamp="2025-03-19 11:51:53 +0000 UTC" firstStartedPulling="2025-03-19 11:51:54.793640757 +0000 UTC m=+7.223620784" lastFinishedPulling="2025-03-19 11:52:01.001412067 +0000 UTC m=+13.431392094" observedRunningTime="2025-03-19 11:52:07.809532581 +0000 UTC m=+20.239512608" watchObservedRunningTime="2025-03-19 11:52:07.809635381 +0000 UTC m=+20.239615408"
Mar 19 11:52:09.166074 systemd-networkd[1342]: cilium_host: Link UP
Mar 19 11:52:09.166213 systemd-networkd[1342]: cilium_net: Link UP
Mar 19 11:52:09.166331 systemd-networkd[1342]: cilium_net: Gained carrier
Mar 19 11:52:09.166464 systemd-networkd[1342]: cilium_host: Gained carrier
Mar 19 11:52:09.338292 systemd-networkd[1342]: cilium_vxlan: Link UP
Mar 19 11:52:09.338300 systemd-networkd[1342]: cilium_vxlan: Gained carrier
Mar 19 11:52:09.540886 systemd-networkd[1342]: cilium_host: Gained IPv6LL
Mar 19 11:52:09.672866 kernel: NET: Registered PF_ALG protocol family
Mar 19 11:52:10.188862 systemd-networkd[1342]: cilium_net: Gained IPv6LL
Mar 19 11:52:10.452751 systemd-networkd[1342]: lxc_health: Link UP
Mar 19 11:52:10.452996 systemd-networkd[1342]: cilium_vxlan: Gained IPv6LL
Mar 19 11:52:10.453136 systemd-networkd[1342]: lxc_health: Gained carrier
Mar 19 11:52:10.582815 systemd-networkd[1342]: lxcce607f85eb1a: Link UP
Mar 19 11:52:10.590889 kernel: eth0: renamed from tmp4d705
Mar 19 11:52:10.593770 systemd-networkd[1342]: lxcce607f85eb1a: Gained carrier
Mar 19 11:52:10.611261 systemd-networkd[1342]: lxce8005b2106af: Link UP
Mar 19 11:52:10.633746 kernel: eth0: renamed from tmpfb08b
Mar 19 11:52:10.638923 systemd-networkd[1342]: lxce8005b2106af: Gained carrier
Mar 19 11:52:11.596858 systemd-networkd[1342]: lxc_health: Gained IPv6LL
Mar 19 11:52:11.916853 systemd-networkd[1342]: lxcce607f85eb1a: Gained IPv6LL
Mar 19 11:52:12.556919 systemd-networkd[1342]: lxce8005b2106af: Gained IPv6LL
Mar 19 11:52:14.270367 containerd[1740]: time="2025-03-19T11:52:14.270250021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:52:14.270367 containerd[1740]: time="2025-03-19T11:52:14.270326301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:52:14.274140 containerd[1740]: time="2025-03-19T11:52:14.273842105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:52:14.274140 containerd[1740]: time="2025-03-19T11:52:14.274028866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:52:14.304862 containerd[1740]: time="2025-03-19T11:52:14.300096939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:52:14.304862 containerd[1740]: time="2025-03-19T11:52:14.301071140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:52:14.304862 containerd[1740]: time="2025-03-19T11:52:14.301100140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:52:14.304862 containerd[1740]: time="2025-03-19T11:52:14.301266861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:52:14.308291 systemd[1]: Started cri-containerd-4d705d2ae3d8187df0afaad1ebe739b0512135bd77b17a0adcb680900ef916de.scope - libcontainer container 4d705d2ae3d8187df0afaad1ebe739b0512135bd77b17a0adcb680900ef916de.
Mar 19 11:52:14.334888 systemd[1]: Started cri-containerd-fb08bfd2fb2930bff8b4f5ad8618ed91f5876cc1cde0ae0210fcad5ded70abb6.scope - libcontainer container fb08bfd2fb2930bff8b4f5ad8618ed91f5876cc1cde0ae0210fcad5ded70abb6.
Mar 19 11:52:14.383402 containerd[1740]: time="2025-03-19T11:52:14.383221966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrqsc,Uid:25628f11-6ff7-40e4-8074-f826a093cb37,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb08bfd2fb2930bff8b4f5ad8618ed91f5876cc1cde0ae0210fcad5ded70abb6\""
Mar 19 11:52:14.384641 containerd[1740]: time="2025-03-19T11:52:14.384618808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8njlm,Uid:adbe205a-f9b3-415c-9aed-6d476ba21ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d705d2ae3d8187df0afaad1ebe739b0512135bd77b17a0adcb680900ef916de\""
Mar 19 11:52:14.388171 containerd[1740]: time="2025-03-19T11:52:14.388143012Z" level=info msg="CreateContainer within sandbox \"4d705d2ae3d8187df0afaad1ebe739b0512135bd77b17a0adcb680900ef916de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:52:14.389728 containerd[1740]: time="2025-03-19T11:52:14.388969974Z" level=info msg="CreateContainer within sandbox \"fb08bfd2fb2930bff8b4f5ad8618ed91f5876cc1cde0ae0210fcad5ded70abb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:52:14.445218 containerd[1740]: time="2025-03-19T11:52:14.445158646Z" level=info msg="CreateContainer within sandbox \"4d705d2ae3d8187df0afaad1ebe739b0512135bd77b17a0adcb680900ef916de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a566bef408b10af57d88b78b3cc465697b2fcd47e21d052037e53c6eedb7a902\""
Mar 19 11:52:14.446650 containerd[1740]: time="2025-03-19T11:52:14.445886567Z" level=info msg="StartContainer for \"a566bef408b10af57d88b78b3cc465697b2fcd47e21d052037e53c6eedb7a902\""
Mar 19 11:52:14.450938 containerd[1740]: time="2025-03-19T11:52:14.450908213Z" level=info msg="CreateContainer within sandbox \"fb08bfd2fb2930bff8b4f5ad8618ed91f5876cc1cde0ae0210fcad5ded70abb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a32b5a49defd187fe5fa50cba168e4b49ed64df06f2e413a3fbf4499c528d94f\""
Mar 19 11:52:14.452018 containerd[1740]: time="2025-03-19T11:52:14.451996095Z" level=info msg="StartContainer for \"a32b5a49defd187fe5fa50cba168e4b49ed64df06f2e413a3fbf4499c528d94f\""
Mar 19 11:52:14.476531 systemd[1]: Started cri-containerd-a566bef408b10af57d88b78b3cc465697b2fcd47e21d052037e53c6eedb7a902.scope - libcontainer container a566bef408b10af57d88b78b3cc465697b2fcd47e21d052037e53c6eedb7a902.
Mar 19 11:52:14.482871 systemd[1]: Started cri-containerd-a32b5a49defd187fe5fa50cba168e4b49ed64df06f2e413a3fbf4499c528d94f.scope - libcontainer container a32b5a49defd187fe5fa50cba168e4b49ed64df06f2e413a3fbf4499c528d94f.
Mar 19 11:52:14.508568 containerd[1740]: time="2025-03-19T11:52:14.508494407Z" level=info msg="StartContainer for \"a566bef408b10af57d88b78b3cc465697b2fcd47e21d052037e53c6eedb7a902\" returns successfully"
Mar 19 11:52:14.533105 containerd[1740]: time="2025-03-19T11:52:14.532898919Z" level=info msg="StartContainer for \"a32b5a49defd187fe5fa50cba168e4b49ed64df06f2e413a3fbf4499c528d94f\" returns successfully"
Mar 19 11:52:14.839218 kubelet[3365]: I0319 11:52:14.839085 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wrqsc" podStartSLOduration=21.839070753 podStartE2EDuration="21.839070753s" podCreationTimestamp="2025-03-19 11:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:52:14.816889524 +0000 UTC m=+27.246869551" watchObservedRunningTime="2025-03-19 11:52:14.839070753 +0000 UTC m=+27.269050780"
Mar 19 11:52:14.860564 kubelet[3365]: I0319 11:52:14.860502 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8njlm" podStartSLOduration=21.86048574 podStartE2EDuration="21.86048574s" podCreationTimestamp="2025-03-19 11:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:52:14.8604307 +0000 UTC m=+27.290410727" watchObservedRunningTime="2025-03-19 11:52:14.86048574 +0000 UTC m=+27.290465807"
Mar 19 11:52:35.154730 update_engine[1718]: I20250319 11:52:35.154238 1718 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 19 11:52:35.154730 update_engine[1718]: I20250319 11:52:35.154285 1718 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 19 11:52:35.154730 update_engine[1718]: I20250319 11:52:35.154438 1718 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154812 1718 omaha_request_params.cc:62] Current group set to beta
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154899 1718 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154909 1718 update_attempter.cc:643] Scheduling an action processor start.
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154924 1718 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154950 1718 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154990 1718 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.154996 1718 omaha_request_action.cc:272] Request:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]:
Mar 19 11:52:35.155191 update_engine[1718]: I20250319 11:52:35.155001 1718 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:52:35.155828 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 19 11:52:35.156123 update_engine[1718]: I20250319 11:52:35.156087 1718 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:52:35.156459 update_engine[1718]: I20250319 11:52:35.156428 1718 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:52:35.212862 update_engine[1718]: E20250319 11:52:35.212808 1718 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:52:35.212983 update_engine[1718]: I20250319 11:52:35.212908 1718 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 19 11:52:40.232751 waagent[1949]: 2025-03-19T11:52:40.232622Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 19 11:52:40.242843 waagent[1949]: 2025-03-19T11:52:40.242791Z INFO ExtHandler
Mar 19 11:52:40.242961 waagent[1949]: 2025-03-19T11:52:40.242924Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9970473c-e12d-4be8-a12c-52f67294d163 eTag: 16163278885058356122 source: Fabric]
Mar 19 11:52:40.243331 waagent[1949]: 2025-03-19T11:52:40.243288Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 19 11:52:40.243936 waagent[1949]: 2025-03-19T11:52:40.243888Z INFO ExtHandler
Mar 19 11:52:40.244004 waagent[1949]: 2025-03-19T11:52:40.243975Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 19 11:52:40.331987 waagent[1949]: 2025-03-19T11:52:40.331943Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 19 11:52:40.421673 waagent[1949]: 2025-03-19T11:52:40.421578Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C2DAB7AA55C463EFB5E2595C01E920A1D3307733', 'hasPrivateKey': True}
Mar 19 11:52:40.422081 waagent[1949]: 2025-03-19T11:52:40.422035Z INFO ExtHandler Downloaded certificate {'thumbprint': '505341C5D144B07F7F9B922454327DCE99A38649', 'hasPrivateKey': False}
Mar 19 11:52:40.422461 waagent[1949]: 2025-03-19T11:52:40.422421Z INFO ExtHandler Fetch goal state completed
Mar 19 11:52:40.422810 waagent[1949]: 2025-03-19T11:52:40.422772Z INFO ExtHandler ExtHandler
Mar 19 11:52:40.422882 waagent[1949]: 2025-03-19T11:52:40.422851Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: b31196af-7022-445e-8f68-e1e4b746e3b3 correlation 0ad86746-6a24-4305-8d87-a65cd761914f created: 2025-03-19T11:52:27.524246Z]
Mar 19 11:52:40.423236 waagent[1949]: 2025-03-19T11:52:40.423192Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 19 11:52:40.423873 waagent[1949]: 2025-03-19T11:52:40.423832Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms]
Mar 19 11:52:45.136293 update_engine[1718]: I20250319 11:52:45.135771 1718 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:52:45.136293 update_engine[1718]: I20250319 11:52:45.136004 1718 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:52:45.136293 update_engine[1718]: I20250319 11:52:45.136243 1718 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:52:45.212405 update_engine[1718]: E20250319 11:52:45.212287 1718 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:52:45.212405 update_engine[1718]: I20250319 11:52:45.212376 1718 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 19 11:52:55.144616 update_engine[1718]: I20250319 11:52:55.144122 1718 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:52:55.144616 update_engine[1718]: I20250319 11:52:55.144345 1718 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:52:55.144616 update_engine[1718]: I20250319 11:52:55.144577 1718 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:52:55.193330 update_engine[1718]: E20250319 11:52:55.193221 1718 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:52:55.193330 update_engine[1718]: I20250319 11:52:55.193305 1718 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 19 11:53:05.143754 update_engine[1718]: I20250319 11:53:05.143286 1718 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:53:05.143754 update_engine[1718]: I20250319 11:53:05.143561 1718 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:53:05.144119 update_engine[1718]: I20250319 11:53:05.143829 1718 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:53:05.269772 update_engine[1718]: E20250319 11:53:05.269629 1718 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269781 1718 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269796 1718 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:53:05.269967 update_engine[1718]: E20250319 11:53:05.269893 1718 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269910 1718 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269915 1718 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269919 1718 update_attempter.cc:306] Processing Done.
Mar 19 11:53:05.269967 update_engine[1718]: E20250319 11:53:05.269934 1718 update_attempter.cc:619] Update failed.
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269939 1718 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269944 1718 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 19 11:53:05.269967 update_engine[1718]: I20250319 11:53:05.269949 1718 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 19 11:53:05.270163 update_engine[1718]: I20250319 11:53:05.270022 1718 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 19 11:53:05.270163 update_engine[1718]: I20250319 11:53:05.270043 1718 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 19 11:53:05.270163 update_engine[1718]: I20250319 11:53:05.270048 1718 omaha_request_action.cc:272] Request:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]:
Mar 19 11:53:05.270163 update_engine[1718]: I20250319 11:53:05.270053 1718 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:53:05.270329 update_engine[1718]: I20250319 11:53:05.270190 1718 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:53:05.270506 update_engine[1718]: I20250319 11:53:05.270407 1718 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:53:05.270644 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 19 11:53:05.284582 update_engine[1718]: E20250319 11:53:05.284523 1718 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284615 1718 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284624 1718 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284631 1718 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284636 1718 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284641 1718 update_attempter.cc:306] Processing Done.
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284646 1718 update_attempter.cc:310] Error event sent.
Mar 19 11:53:05.284664 update_engine[1718]: I20250319 11:53:05.284655 1718 update_check_scheduler.cc:74] Next update check in 47m52s
Mar 19 11:53:05.284968 locksmithd[1785]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 19 11:53:29.097115 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:35586.service - OpenSSH per-connection server daemon (10.200.16.10:35586).
Mar 19 11:53:29.540542 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 35586 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:29.541845 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:29.547569 systemd-logind[1713]: New session 10 of user core.
Mar 19 11:53:29.550925 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 19 11:53:30.047745 sshd[4758]: Connection closed by 10.200.16.10 port 35586
Mar 19 11:53:30.048291 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:30.050799 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:35586.service: Deactivated successfully.
Mar 19 11:53:30.053435 systemd[1]: session-10.scope: Deactivated successfully.
Mar 19 11:53:30.054817 systemd-logind[1713]: Session 10 logged out. Waiting for processes to exit.
Mar 19 11:53:30.056035 systemd-logind[1713]: Removed session 10.
Mar 19 11:53:35.133951 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:35598.service - OpenSSH per-connection server daemon (10.200.16.10:35598).
Mar 19 11:53:35.576078 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 35598 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:35.577803 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:35.582377 systemd-logind[1713]: New session 11 of user core.
Mar 19 11:53:35.588860 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 19 11:53:35.959339 sshd[4773]: Connection closed by 10.200.16.10 port 35598
Mar 19 11:53:35.959863 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:35.963649 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:35598.service: Deactivated successfully.
Mar 19 11:53:35.965996 systemd[1]: session-11.scope: Deactivated successfully.
Mar 19 11:53:35.967050 systemd-logind[1713]: Session 11 logged out. Waiting for processes to exit.
Mar 19 11:53:35.968044 systemd-logind[1713]: Removed session 11.
Mar 19 11:53:41.045948 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:33080.service - OpenSSH per-connection server daemon (10.200.16.10:33080).
Mar 19 11:53:41.489064 sshd[4785]: Accepted publickey for core from 10.200.16.10 port 33080 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:41.490230 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:41.494797 systemd-logind[1713]: New session 12 of user core.
Mar 19 11:53:41.501871 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 19 11:53:41.871143 sshd[4787]: Connection closed by 10.200.16.10 port 33080
Mar 19 11:53:41.871537 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:41.874994 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:33080.service: Deactivated successfully.
Mar 19 11:53:41.877671 systemd[1]: session-12.scope: Deactivated successfully.
Mar 19 11:53:41.878997 systemd-logind[1713]: Session 12 logged out. Waiting for processes to exit.
Mar 19 11:53:41.880082 systemd-logind[1713]: Removed session 12.
Mar 19 11:53:46.957080 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:33096.service - OpenSSH per-connection server daemon (10.200.16.10:33096).
Mar 19 11:53:47.402171 sshd[4802]: Accepted publickey for core from 10.200.16.10 port 33096 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:47.403406 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:47.408239 systemd-logind[1713]: New session 13 of user core.
Mar 19 11:53:47.413858 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 19 11:53:47.798790 sshd[4804]: Connection closed by 10.200.16.10 port 33096
Mar 19 11:53:47.798687 sshd-session[4802]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:47.801819 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:33096.service: Deactivated successfully.
Mar 19 11:53:47.803389 systemd[1]: session-13.scope: Deactivated successfully.
Mar 19 11:53:47.804593 systemd-logind[1713]: Session 13 logged out. Waiting for processes to exit.
Mar 19 11:53:47.805651 systemd-logind[1713]: Removed session 13.
Mar 19 11:53:47.887001 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:33112.service - OpenSSH per-connection server daemon (10.200.16.10:33112).
Mar 19 11:53:48.370937 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 33112 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:48.372508 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:48.376959 systemd-logind[1713]: New session 14 of user core.
Mar 19 11:53:48.380853 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 19 11:53:48.813917 sshd[4821]: Connection closed by 10.200.16.10 port 33112
Mar 19 11:53:48.814454 sshd-session[4819]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:48.818225 systemd-logind[1713]: Session 14 logged out. Waiting for processes to exit.
Mar 19 11:53:48.818840 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:33112.service: Deactivated successfully.
Mar 19 11:53:48.821137 systemd[1]: session-14.scope: Deactivated successfully.
Mar 19 11:53:48.821977 systemd-logind[1713]: Removed session 14.
Mar 19 11:53:48.901409 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:45808.service - OpenSSH per-connection server daemon (10.200.16.10:45808).
Mar 19 11:53:49.385538 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 45808 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:49.386795 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:49.391878 systemd-logind[1713]: New session 15 of user core.
Mar 19 11:53:49.398866 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 19 11:53:49.792741 sshd[4833]: Connection closed by 10.200.16.10 port 45808
Mar 19 11:53:49.792284 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:49.795445 systemd-logind[1713]: Session 15 logged out. Waiting for processes to exit.
Mar 19 11:53:49.796017 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:45808.service: Deactivated successfully.
Mar 19 11:53:49.797836 systemd[1]: session-15.scope: Deactivated successfully.
Mar 19 11:53:49.799508 systemd-logind[1713]: Removed session 15.
Mar 19 11:53:54.887960 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:45824.service - OpenSSH per-connection server daemon (10.200.16.10:45824).
Mar 19 11:53:55.368092 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 45824 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:53:55.369341 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:55.373926 systemd-logind[1713]: New session 16 of user core.
Mar 19 11:53:55.379874 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 19 11:53:55.780406 sshd[4849]: Connection closed by 10.200.16.10 port 45824
Mar 19 11:53:55.780988 sshd-session[4845]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:55.784257 systemd-logind[1713]: Session 16 logged out. Waiting for processes to exit.
Mar 19 11:53:55.784941 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:45824.service: Deactivated successfully.
Mar 19 11:53:55.787138 systemd[1]: session-16.scope: Deactivated successfully.
Mar 19 11:53:55.789186 systemd-logind[1713]: Removed session 16.
Mar 19 11:54:00.868818 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:45608.service - OpenSSH per-connection server daemon (10.200.16.10:45608).
Mar 19 11:54:01.353826 sshd[4861]: Accepted publickey for core from 10.200.16.10 port 45608 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:01.355168 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:01.360347 systemd-logind[1713]: New session 17 of user core.
Mar 19 11:54:01.368933 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 11:54:01.763056 sshd[4863]: Connection closed by 10.200.16.10 port 45608
Mar 19 11:54:01.763799 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:01.767664 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:45608.service: Deactivated successfully.
Mar 19 11:54:01.770549 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 11:54:01.771471 systemd-logind[1713]: Session 17 logged out. Waiting for processes to exit.
Mar 19 11:54:01.772646 systemd-logind[1713]: Removed session 17.
Mar 19 11:54:01.854012 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:45612.service - OpenSSH per-connection server daemon (10.200.16.10:45612).
Mar 19 11:54:02.299557 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 45612 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:02.300910 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:02.306692 systemd-logind[1713]: New session 18 of user core.
Mar 19 11:54:02.312953 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 11:54:02.762841 sshd[4876]: Connection closed by 10.200.16.10 port 45612
Mar 19 11:54:02.763524 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:02.767177 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:45612.service: Deactivated successfully.
Mar 19 11:54:02.769133 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 11:54:02.770069 systemd-logind[1713]: Session 18 logged out. Waiting for processes to exit.
Mar 19 11:54:02.771755 systemd-logind[1713]: Removed session 18.
Mar 19 11:54:02.850027 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:45614.service - OpenSSH per-connection server daemon (10.200.16.10:45614).
Mar 19 11:54:03.294117 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 45614 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:03.295512 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:03.300098 systemd-logind[1713]: New session 19 of user core.
Mar 19 11:54:03.307900 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:54:05.117378 sshd[4888]: Connection closed by 10.200.16.10 port 45614
Mar 19 11:54:05.119313 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:05.122969 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:45614.service: Deactivated successfully.
Mar 19 11:54:05.125058 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:54:05.125998 systemd-logind[1713]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:54:05.126943 systemd-logind[1713]: Removed session 19.
Mar 19 11:54:05.215996 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:45622.service - OpenSSH per-connection server daemon (10.200.16.10:45622).
Mar 19 11:54:05.700749 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 45622 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:05.700653 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:05.707784 systemd-logind[1713]: New session 20 of user core.
Mar 19 11:54:05.710916 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:54:06.218862 sshd[4907]: Connection closed by 10.200.16.10 port 45622
Mar 19 11:54:06.219464 sshd-session[4905]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:06.222902 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:45622.service: Deactivated successfully.
Mar 19 11:54:06.225118 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:54:06.226020 systemd-logind[1713]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:54:06.227010 systemd-logind[1713]: Removed session 20.
Mar 19 11:54:06.305009 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:45628.service - OpenSSH per-connection server daemon (10.200.16.10:45628).
Mar 19 11:54:06.750705 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 45628 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:06.752018 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:06.756455 systemd-logind[1713]: New session 21 of user core.
Mar 19 11:54:06.762932 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 11:54:07.131148 sshd[4918]: Connection closed by 10.200.16.10 port 45628
Mar 19 11:54:07.131685 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:07.135244 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:45628.service: Deactivated successfully.
Mar 19 11:54:07.138542 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 11:54:07.139257 systemd-logind[1713]: Session 21 logged out. Waiting for processes to exit.
Mar 19 11:54:07.140284 systemd-logind[1713]: Removed session 21.
Mar 19 11:54:12.217603 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:51074.service - OpenSSH per-connection server daemon (10.200.16.10:51074).
Mar 19 11:54:12.665529 sshd[4933]: Accepted publickey for core from 10.200.16.10 port 51074 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:12.666872 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:12.670991 systemd-logind[1713]: New session 22 of user core.
Mar 19 11:54:12.675920 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 11:54:13.047632 sshd[4935]: Connection closed by 10.200.16.10 port 51074
Mar 19 11:54:13.048212 sshd-session[4933]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:13.051560 systemd-logind[1713]: Session 22 logged out. Waiting for processes to exit.
Mar 19 11:54:13.052268 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:51074.service: Deactivated successfully.
Mar 19 11:54:13.054258 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 11:54:13.055497 systemd-logind[1713]: Removed session 22.
Mar 19 11:54:18.137152 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:51088.service - OpenSSH per-connection server daemon (10.200.16.10:51088).
Mar 19 11:54:18.581493 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 51088 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:18.582822 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:18.587914 systemd-logind[1713]: New session 23 of user core.
Mar 19 11:54:18.594918 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 11:54:18.965841 sshd[4948]: Connection closed by 10.200.16.10 port 51088
Mar 19 11:54:18.966380 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:18.969393 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:51088.service: Deactivated successfully.
Mar 19 11:54:18.972257 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 11:54:18.974221 systemd-logind[1713]: Session 23 logged out. Waiting for processes to exit.
Mar 19 11:54:18.975182 systemd-logind[1713]: Removed session 23.
Mar 19 11:54:24.057979 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:40172.service - OpenSSH per-connection server daemon (10.200.16.10:40172).
Mar 19 11:54:24.539337 sshd[4960]: Accepted publickey for core from 10.200.16.10 port 40172 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:24.540571 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:24.545803 systemd-logind[1713]: New session 24 of user core.
Mar 19 11:54:24.550963 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 11:54:24.943217 sshd[4962]: Connection closed by 10.200.16.10 port 40172
Mar 19 11:54:24.943786 sshd-session[4960]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:24.947496 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:40172.service: Deactivated successfully.
Mar 19 11:54:24.949625 systemd[1]: session-24.scope: Deactivated successfully.
Mar 19 11:54:24.950362 systemd-logind[1713]: Session 24 logged out. Waiting for processes to exit.
Mar 19 11:54:24.951560 systemd-logind[1713]: Removed session 24.
Mar 19 11:54:25.036012 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:40186.service - OpenSSH per-connection server daemon (10.200.16.10:40186).
Mar 19 11:54:25.479591 sshd[4974]: Accepted publickey for core from 10.200.16.10 port 40186 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:25.480968 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:25.485899 systemd-logind[1713]: New session 25 of user core.
Mar 19 11:54:25.490883 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 19 11:54:27.449250 containerd[1740]: time="2025-03-19T11:54:27.447866944Z" level=info msg="StopContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" with timeout 30 (s)"
Mar 19 11:54:27.449900 containerd[1740]: time="2025-03-19T11:54:27.449800547Z" level=info msg="Stop container \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" with signal terminated"
Mar 19 11:54:27.455695 containerd[1740]: time="2025-03-19T11:54:27.455643475Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:54:27.461668 systemd[1]: cri-containerd-be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93.scope: Deactivated successfully.
Mar 19 11:54:27.469742 containerd[1740]: time="2025-03-19T11:54:27.469110375Z" level=info msg="StopContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" with timeout 2 (s)"
Mar 19 11:54:27.469742 containerd[1740]: time="2025-03-19T11:54:27.469561336Z" level=info msg="Stop container \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" with signal terminated"
Mar 19 11:54:27.477844 systemd-networkd[1342]: lxc_health: Link DOWN
Mar 19 11:54:27.477851 systemd-networkd[1342]: lxc_health: Lost carrier
Mar 19 11:54:27.491547 systemd[1]: cri-containerd-fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2.scope: Deactivated successfully.
Mar 19 11:54:27.491875 systemd[1]: cri-containerd-fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2.scope: Consumed 6.254s CPU time, 126.2M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:54:27.496937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93-rootfs.mount: Deactivated successfully.
Mar 19 11:54:27.516452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2-rootfs.mount: Deactivated successfully.
Mar 19 11:54:27.571997 containerd[1740]: time="2025-03-19T11:54:27.571747286Z" level=info msg="shim disconnected" id=be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93 namespace=k8s.io
Mar 19 11:54:27.571997 containerd[1740]: time="2025-03-19T11:54:27.571806406Z" level=warning msg="cleaning up after shim disconnected" id=be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93 namespace=k8s.io
Mar 19 11:54:27.571997 containerd[1740]: time="2025-03-19T11:54:27.571818167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:27.574836 containerd[1740]: time="2025-03-19T11:54:27.574566891Z" level=info msg="shim disconnected" id=fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2 namespace=k8s.io
Mar 19 11:54:27.574836 containerd[1740]: time="2025-03-19T11:54:27.574641291Z" level=warning msg="cleaning up after shim disconnected" id=fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2 namespace=k8s.io
Mar 19 11:54:27.574836 containerd[1740]: time="2025-03-19T11:54:27.574652491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:27.600497 containerd[1740]: time="2025-03-19T11:54:27.600398609Z" level=info msg="StopContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" returns successfully"
Mar 19 11:54:27.601074 containerd[1740]: time="2025-03-19T11:54:27.601049250Z" level=info msg="StopPodSandbox for \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\""
Mar 19 11:54:27.601129 containerd[1740]: time="2025-03-19T11:54:27.601087650Z" level=info msg="Container to stop \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.602884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e-shm.mount: Deactivated successfully.
Mar 19 11:54:27.606409 containerd[1740]: time="2025-03-19T11:54:27.606365257Z" level=info msg="StopContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" returns successfully"
Mar 19 11:54:27.607062 containerd[1740]: time="2025-03-19T11:54:27.607033738Z" level=info msg="StopPodSandbox for \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\""
Mar 19 11:54:27.607152 containerd[1740]: time="2025-03-19T11:54:27.607070658Z" level=info msg="Container to stop \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.607152 containerd[1740]: time="2025-03-19T11:54:27.607082059Z" level=info msg="Container to stop \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.607152 containerd[1740]: time="2025-03-19T11:54:27.607090459Z" level=info msg="Container to stop \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.607152 containerd[1740]: time="2025-03-19T11:54:27.607100379Z" level=info msg="Container to stop \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.607152 containerd[1740]: time="2025-03-19T11:54:27.607108099Z" level=info msg="Container to stop \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:54:27.611250 systemd[1]: cri-containerd-cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e.scope: Deactivated successfully.
Mar 19 11:54:27.615118 systemd[1]: cri-containerd-e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e.scope: Deactivated successfully.
Mar 19 11:54:27.652365 containerd[1740]: time="2025-03-19T11:54:27.651953645Z" level=info msg="shim disconnected" id=cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e namespace=k8s.io
Mar 19 11:54:27.652365 containerd[1740]: time="2025-03-19T11:54:27.652013845Z" level=warning msg="cleaning up after shim disconnected" id=cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e namespace=k8s.io
Mar 19 11:54:27.652365 containerd[1740]: time="2025-03-19T11:54:27.652023045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:27.655146 containerd[1740]: time="2025-03-19T11:54:27.655081289Z" level=info msg="shim disconnected" id=e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e namespace=k8s.io
Mar 19 11:54:27.655146 containerd[1740]: time="2025-03-19T11:54:27.655147009Z" level=warning msg="cleaning up after shim disconnected" id=e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e namespace=k8s.io
Mar 19 11:54:27.655291 containerd[1740]: time="2025-03-19T11:54:27.655156729Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:27.669356 containerd[1740]: time="2025-03-19T11:54:27.669210830Z" level=info msg="TearDown network for sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" successfully"
Mar 19 11:54:27.669356 containerd[1740]: time="2025-03-19T11:54:27.669243750Z" level=info msg="StopPodSandbox for \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" returns successfully"
Mar 19 11:54:27.670492 containerd[1740]: time="2025-03-19T11:54:27.670398792Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:54:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 19 11:54:27.672082 containerd[1740]: time="2025-03-19T11:54:27.671906554Z" level=info msg="TearDown network for sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" successfully"
Mar 19 11:54:27.672082 containerd[1740]: time="2025-03-19T11:54:27.671932234Z" level=info msg="StopPodSandbox for \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" returns successfully"
Mar 19 11:54:27.796624 kubelet[3365]: I0319 11:54:27.796533 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-net\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.796624 kubelet[3365]: I0319 11:54:27.796571 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwnx\" (UniqueName: \"kubernetes.io/projected/5a9412ec-7486-4091-9abe-7f4b62dcaabc-kube-api-access-ddwnx\") pod \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\" (UID: \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\") "
Mar 19 11:54:27.796624 kubelet[3365]: I0319 11:54:27.796591 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hubble-tls\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.796624 kubelet[3365]: I0319 11:54:27.796612 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a9412ec-7486-4091-9abe-7f4b62dcaabc-cilium-config-path\") pod \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\" (UID: \"5a9412ec-7486-4091-9abe-7f4b62dcaabc\") "
Mar 19 11:54:27.796624 kubelet[3365]: I0319 11:54:27.796633 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-lib-modules\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796648 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-run\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796661 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-kernel\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796676 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-xtables-lock\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796692 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-clustermesh-secrets\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796708 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hostproc\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797120 kubelet[3365]: I0319 11:54:27.796739 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-config-path\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796756 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cni-path\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796770 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-cgroup\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796788 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj44r\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-kube-api-access-gj44r\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796805 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-etc-cni-netd\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID: \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") "
Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796819 3365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-bpf-maps\") pod \"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\" (UID:
\"e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3\") " Mar 19 11:54:27.797247 kubelet[3365]: I0319 11:54:27.796894 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.797389 kubelet[3365]: I0319 11:54:27.796927 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801532 kubelet[3365]: I0319 11:54:27.800793 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801532 kubelet[3365]: I0319 11:54:27.800850 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801532 kubelet[3365]: I0319 11:54:27.800867 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801532 kubelet[3365]: I0319 11:54:27.800884 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801532 kubelet[3365]: I0319 11:54:27.801202 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:27.801964 kubelet[3365]: I0319 11:54:27.801285 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:54:27.801964 kubelet[3365]: I0319 11:54:27.801369 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801964 kubelet[3365]: I0319 11:54:27.801543 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801964 kubelet[3365]: I0319 11:54:27.801575 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.801964 kubelet[3365]: I0319 11:54:27.801672 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9412ec-7486-4091-9abe-7f4b62dcaabc-kube-api-access-ddwnx" (OuterVolumeSpecName: "kube-api-access-ddwnx") pod "5a9412ec-7486-4091-9abe-7f4b62dcaabc" (UID: "5a9412ec-7486-4091-9abe-7f4b62dcaabc"). InnerVolumeSpecName "kube-api-access-ddwnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:27.802079 kubelet[3365]: I0319 11:54:27.801699 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:54:27.802566 kubelet[3365]: E0319 11:54:27.802288 3365 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:54:27.803577 kubelet[3365]: I0319 11:54:27.803442 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9412ec-7486-4091-9abe-7f4b62dcaabc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a9412ec-7486-4091-9abe-7f4b62dcaabc" (UID: "5a9412ec-7486-4091-9abe-7f4b62dcaabc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:27.804803 kubelet[3365]: I0319 11:54:27.804749 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:54:27.805302 kubelet[3365]: I0319 11:54:27.805279 3365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-kube-api-access-gj44r" (OuterVolumeSpecName: "kube-api-access-gj44r") pod "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" (UID: "e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3"). 
InnerVolumeSpecName "kube-api-access-gj44r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897690 3365 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-lib-modules\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897757 3365 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-run\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897768 3365 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-kernel\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897778 3365 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-xtables-lock\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897788 3365 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-clustermesh-secrets\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.897821 kubelet[3365]: I0319 11:54:27.897796 3365 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hostproc\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.897805 3365 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-config-path\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898089 3365 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cni-path\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898099 3365 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-cilium-cgroup\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898107 3365 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gj44r\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-kube-api-access-gj44r\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898115 3365 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-etc-cni-netd\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898124 3365 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-bpf-maps\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898134 3365 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-host-proc-sys-net\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898188 kubelet[3365]: I0319 11:54:27.898157 3365 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ddwnx\" 
(UniqueName: \"kubernetes.io/projected/5a9412ec-7486-4091-9abe-7f4b62dcaabc-kube-api-access-ddwnx\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898387 kubelet[3365]: I0319 11:54:27.898167 3365 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3-hubble-tls\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:27.898387 kubelet[3365]: I0319 11:54:27.898176 3365 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a9412ec-7486-4091-9abe-7f4b62dcaabc-cilium-config-path\") on node \"ci-4230.1.0-a-361b280840\" DevicePath \"\"" Mar 19 11:54:28.033430 kubelet[3365]: I0319 11:54:28.033391 3365 scope.go:117] "RemoveContainer" containerID="be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93" Mar 19 11:54:28.036409 containerd[1740]: time="2025-03-19T11:54:28.036203811Z" level=info msg="RemoveContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\"" Mar 19 11:54:28.041682 systemd[1]: Removed slice kubepods-besteffort-pod5a9412ec_7486_4091_9abe_7f4b62dcaabc.slice - libcontainer container kubepods-besteffort-pod5a9412ec_7486_4091_9abe_7f4b62dcaabc.slice. Mar 19 11:54:28.049023 systemd[1]: Removed slice kubepods-burstable-pode6f21c8a_30e6_4ac4_b3da_9d1ff91413f3.slice - libcontainer container kubepods-burstable-pode6f21c8a_30e6_4ac4_b3da_9d1ff91413f3.slice. Mar 19 11:54:28.049359 systemd[1]: kubepods-burstable-pode6f21c8a_30e6_4ac4_b3da_9d1ff91413f3.slice: Consumed 6.327s CPU time, 126.6M memory peak, 136K read from disk, 12.9M written to disk. 
Mar 19 11:54:28.052763 containerd[1740]: time="2025-03-19T11:54:28.052580035Z" level=info msg="RemoveContainer for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" returns successfully" Mar 19 11:54:28.053050 kubelet[3365]: I0319 11:54:28.053026 3365 scope.go:117] "RemoveContainer" containerID="be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93" Mar 19 11:54:28.053466 containerd[1740]: time="2025-03-19T11:54:28.053429357Z" level=error msg="ContainerStatus for \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\": not found" Mar 19 11:54:28.053676 kubelet[3365]: E0319 11:54:28.053620 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\": not found" containerID="be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93" Mar 19 11:54:28.053965 kubelet[3365]: I0319 11:54:28.053652 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93"} err="failed to get container status \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\": rpc error: code = NotFound desc = an error occurred when try to find container \"be409e02595f78fbed6d5029300bba5b61b036cc67abf6efb23d84fa5caa4b93\": not found" Mar 19 11:54:28.053965 kubelet[3365]: I0319 11:54:28.053818 3365 scope.go:117] "RemoveContainer" containerID="fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2" Mar 19 11:54:28.055149 containerd[1740]: time="2025-03-19T11:54:28.055116799Z" level=info msg="RemoveContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\"" Mar 19 11:54:28.067295 
containerd[1740]: time="2025-03-19T11:54:28.066264376Z" level=info msg="RemoveContainer for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" returns successfully" Mar 19 11:54:28.067449 kubelet[3365]: I0319 11:54:28.066521 3365 scope.go:117] "RemoveContainer" containerID="8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201" Mar 19 11:54:28.067725 containerd[1740]: time="2025-03-19T11:54:28.067599578Z" level=info msg="RemoveContainer for \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\"" Mar 19 11:54:28.079558 containerd[1740]: time="2025-03-19T11:54:28.079497875Z" level=info msg="RemoveContainer for \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\" returns successfully" Mar 19 11:54:28.079859 kubelet[3365]: I0319 11:54:28.079788 3365 scope.go:117] "RemoveContainer" containerID="ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1" Mar 19 11:54:28.081361 containerd[1740]: time="2025-03-19T11:54:28.081237438Z" level=info msg="RemoveContainer for \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\"" Mar 19 11:54:28.096054 containerd[1740]: time="2025-03-19T11:54:28.096010939Z" level=info msg="RemoveContainer for \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\" returns successfully" Mar 19 11:54:28.096392 kubelet[3365]: I0319 11:54:28.096262 3365 scope.go:117] "RemoveContainer" containerID="04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936" Mar 19 11:54:28.097389 containerd[1740]: time="2025-03-19T11:54:28.097359661Z" level=info msg="RemoveContainer for \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\"" Mar 19 11:54:28.107415 containerd[1740]: time="2025-03-19T11:54:28.107375756Z" level=info msg="RemoveContainer for \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\" returns successfully" Mar 19 11:54:28.107704 kubelet[3365]: I0319 11:54:28.107629 3365 scope.go:117] "RemoveContainer" 
containerID="2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7" Mar 19 11:54:28.108908 containerd[1740]: time="2025-03-19T11:54:28.108873798Z" level=info msg="RemoveContainer for \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\"" Mar 19 11:54:28.128882 containerd[1740]: time="2025-03-19T11:54:28.128833548Z" level=info msg="RemoveContainer for \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\" returns successfully" Mar 19 11:54:28.129883 kubelet[3365]: I0319 11:54:28.129588 3365 scope.go:117] "RemoveContainer" containerID="fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2" Mar 19 11:54:28.130220 containerd[1740]: time="2025-03-19T11:54:28.130182870Z" level=error msg="ContainerStatus for \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\": not found" Mar 19 11:54:28.130672 kubelet[3365]: E0319 11:54:28.130561 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\": not found" containerID="fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2" Mar 19 11:54:28.130969 kubelet[3365]: I0319 11:54:28.130766 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2"} err="failed to get container status \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb9f5058585ca72a2ad7249838a2501000afbab8fb74ded9f739352db584a1f2\": not found" Mar 19 11:54:28.130969 kubelet[3365]: I0319 11:54:28.130796 3365 scope.go:117] "RemoveContainer" 
containerID="8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201" Mar 19 11:54:28.132637 containerd[1740]: time="2025-03-19T11:54:28.132344393Z" level=error msg="ContainerStatus for \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\": not found" Mar 19 11:54:28.132745 kubelet[3365]: E0319 11:54:28.132518 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\": not found" containerID="8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201" Mar 19 11:54:28.132745 kubelet[3365]: I0319 11:54:28.132546 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201"} err="failed to get container status \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ad3f982e6dd607d0a0346da53130130a394db50b9fc6a1dad11eaf47d5d8201\": not found" Mar 19 11:54:28.132745 kubelet[3365]: I0319 11:54:28.132567 3365 scope.go:117] "RemoveContainer" containerID="ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1" Mar 19 11:54:28.134411 containerd[1740]: time="2025-03-19T11:54:28.132989874Z" level=error msg="ContainerStatus for \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\": not found" Mar 19 11:54:28.134411 containerd[1740]: time="2025-03-19T11:54:28.133525035Z" level=error msg="ContainerStatus for 
\"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\": not found" Mar 19 11:54:28.134411 containerd[1740]: time="2025-03-19T11:54:28.133942355Z" level=error msg="ContainerStatus for \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\": not found" Mar 19 11:54:28.134544 kubelet[3365]: E0319 11:54:28.133207 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\": not found" containerID="ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1" Mar 19 11:54:28.134544 kubelet[3365]: I0319 11:54:28.133231 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1"} err="failed to get container status \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccd0f2d5e939f986189309ed6dd5c4ed074fd3a3cd9ad3ff31b4cdce3912ebf1\": not found" Mar 19 11:54:28.134544 kubelet[3365]: I0319 11:54:28.133251 3365 scope.go:117] "RemoveContainer" containerID="04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936" Mar 19 11:54:28.134544 kubelet[3365]: E0319 11:54:28.133645 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\": not found" 
containerID="04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936" Mar 19 11:54:28.134544 kubelet[3365]: I0319 11:54:28.133665 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936"} err="failed to get container status \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\": rpc error: code = NotFound desc = an error occurred when try to find container \"04766b0b5da0b34dd93c11674c0dd696d8d0716ea842a34865f8294256e08936\": not found" Mar 19 11:54:28.134544 kubelet[3365]: I0319 11:54:28.133677 3365 scope.go:117] "RemoveContainer" containerID="2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7" Mar 19 11:54:28.134672 kubelet[3365]: E0319 11:54:28.134100 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\": not found" containerID="2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7" Mar 19 11:54:28.134672 kubelet[3365]: I0319 11:54:28.134119 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7"} err="failed to get container status \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2daecdeab499d96c908bbdc6211b15ce8525a0bdb69c102c37da1e145e8030f7\": not found" Mar 19 11:54:28.439314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e-rootfs.mount: Deactivated successfully. 
Mar 19 11:54:28.439893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e-rootfs.mount: Deactivated successfully. Mar 19 11:54:28.440343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e-shm.mount: Deactivated successfully. Mar 19 11:54:28.440414 systemd[1]: var-lib-kubelet-pods-e6f21c8a\x2d30e6\x2d4ac4\x2db3da\x2d9d1ff91413f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgj44r.mount: Deactivated successfully. Mar 19 11:54:28.440467 systemd[1]: var-lib-kubelet-pods-5a9412ec\x2d7486\x2d4091\x2d9abe\x2d7f4b62dcaabc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dddwnx.mount: Deactivated successfully. Mar 19 11:54:28.440517 systemd[1]: var-lib-kubelet-pods-e6f21c8a\x2d30e6\x2d4ac4\x2db3da\x2d9d1ff91413f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:54:28.440568 systemd[1]: var-lib-kubelet-pods-e6f21c8a\x2d30e6\x2d4ac4\x2db3da\x2d9d1ff91413f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:54:29.439820 sshd[4978]: Connection closed by 10.200.16.10 port 40186 Mar 19 11:54:29.440453 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Mar 19 11:54:29.444325 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:40186.service: Deactivated successfully. Mar 19 11:54:29.446326 systemd[1]: session-25.scope: Deactivated successfully. Mar 19 11:54:29.446546 systemd[1]: session-25.scope: Consumed 1.060s CPU time, 21.5M memory peak. Mar 19 11:54:29.447445 systemd-logind[1713]: Session 25 logged out. Waiting for processes to exit. Mar 19 11:54:29.448353 systemd-logind[1713]: Removed session 25. Mar 19 11:54:29.535057 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:52626.service - OpenSSH per-connection server daemon (10.200.16.10:52626). 
Mar 19 11:54:29.698532 kubelet[3365]: I0319 11:54:29.698420 3365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9412ec-7486-4091-9abe-7f4b62dcaabc" path="/var/lib/kubelet/pods/5a9412ec-7486-4091-9abe-7f4b62dcaabc/volumes"
Mar 19 11:54:29.698905 kubelet[3365]: I0319 11:54:29.698867 3365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" path="/var/lib/kubelet/pods/e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3/volumes"
Mar 19 11:54:30.023384 sshd[5137]: Accepted publickey for core from 10.200.16.10 port 52626 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:30.024704 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:30.028807 systemd-logind[1713]: New session 26 of user core.
Mar 19 11:54:30.034978 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049042 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="apply-sysctl-overwrites"
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049073 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a9412ec-7486-4091-9abe-7f4b62dcaabc" containerName="cilium-operator"
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049081 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="clean-cilium-state"
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049088 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="mount-cgroup"
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049094 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="mount-bpf-fs"
Mar 19 11:54:31.050226 kubelet[3365]: E0319 11:54:31.049101 3365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="cilium-agent"
Mar 19 11:54:31.050226 kubelet[3365]: I0319 11:54:31.049125 3365 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f21c8a-30e6-4ac4-b3da-9d1ff91413f3" containerName="cilium-agent"
Mar 19 11:54:31.050226 kubelet[3365]: I0319 11:54:31.049131 3365 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9412ec-7486-4091-9abe-7f4b62dcaabc" containerName="cilium-operator"
Mar 19 11:54:31.062013 systemd[1]: Created slice kubepods-burstable-pod96dbde79_c206_49a3_a6e6_1916fc3bf82c.slice - libcontainer container kubepods-burstable-pod96dbde79_c206_49a3_a6e6_1916fc3bf82c.slice.
Mar 19 11:54:31.107203 sshd[5139]: Connection closed by 10.200.16.10 port 52626
Mar 19 11:54:31.110492 sshd-session[5137]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.114946 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-cni-path\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.114984 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96dbde79-c206-49a3-a6e6-1916fc3bf82c-clustermesh-secrets\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.115004 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96dbde79-c206-49a3-a6e6-1916fc3bf82c-hubble-tls\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.115022 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-cilium-cgroup\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.115037 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-lib-modules\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.115803 kubelet[3365]: I0319 11:54:31.115052 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-host-proc-sys-net\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116221 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:52626.service: Deactivated successfully.
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115067 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-cilium-run\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115081 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-bpf-maps\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115096 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-hostproc\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115112 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-etc-cni-netd\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115127 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-xtables-lock\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116406 kubelet[3365]: I0319 11:54:31.115144 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96dbde79-c206-49a3-a6e6-1916fc3bf82c-cilium-config-path\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116535 kubelet[3365]: I0319 11:54:31.115158 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96dbde79-c206-49a3-a6e6-1916fc3bf82c-cilium-ipsec-secrets\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116535 kubelet[3365]: I0319 11:54:31.115172 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96dbde79-c206-49a3-a6e6-1916fc3bf82c-host-proc-sys-kernel\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.116535 kubelet[3365]: I0319 11:54:31.115188 3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v62d2\" (UniqueName: \"kubernetes.io/projected/96dbde79-c206-49a3-a6e6-1916fc3bf82c-kube-api-access-v62d2\") pod \"cilium-d6xhh\" (UID: \"96dbde79-c206-49a3-a6e6-1916fc3bf82c\") " pod="kube-system/cilium-d6xhh"
Mar 19 11:54:31.119609 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 11:54:31.122287 systemd-logind[1713]: Session 26 logged out. Waiting for processes to exit.
Mar 19 11:54:31.124863 systemd-logind[1713]: Removed session 26.
Mar 19 11:54:31.202979 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:52640.service - OpenSSH per-connection server daemon (10.200.16.10:52640).
Mar 19 11:54:31.370659 containerd[1740]: time="2025-03-19T11:54:31.370006007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d6xhh,Uid:96dbde79-c206-49a3-a6e6-1916fc3bf82c,Namespace:kube-system,Attempt:0,}"
Mar 19 11:54:31.408660 containerd[1740]: time="2025-03-19T11:54:31.408349944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:54:31.408660 containerd[1740]: time="2025-03-19T11:54:31.408410144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:54:31.408660 containerd[1740]: time="2025-03-19T11:54:31.408425184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:54:31.409327 containerd[1740]: time="2025-03-19T11:54:31.409245625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:54:31.426899 systemd[1]: Started cri-containerd-b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08.scope - libcontainer container b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08.
Mar 19 11:54:31.447348 containerd[1740]: time="2025-03-19T11:54:31.447302361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d6xhh,Uid:96dbde79-c206-49a3-a6e6-1916fc3bf82c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\""
Mar 19 11:54:31.451286 containerd[1740]: time="2025-03-19T11:54:31.451229167Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:54:31.490846 containerd[1740]: time="2025-03-19T11:54:31.490794945Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7\""
Mar 19 11:54:31.491367 containerd[1740]: time="2025-03-19T11:54:31.491316866Z" level=info msg="StartContainer for \"291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7\""
Mar 19 11:54:31.516925 systemd[1]: Started cri-containerd-291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7.scope - libcontainer container 291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7.
Mar 19 11:54:31.542244 containerd[1740]: time="2025-03-19T11:54:31.542118621Z" level=info msg="StartContainer for \"291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7\" returns successfully"
Mar 19 11:54:31.547317 systemd[1]: cri-containerd-291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7.scope: Deactivated successfully.
Mar 19 11:54:31.642465 containerd[1740]: time="2025-03-19T11:54:31.642311609Z" level=info msg="shim disconnected" id=291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7 namespace=k8s.io
Mar 19 11:54:31.642465 containerd[1740]: time="2025-03-19T11:54:31.642382489Z" level=warning msg="cleaning up after shim disconnected" id=291071cbc99f0e1e7cc299441a7dc91a32a8963d9f320a5878a4b72986ddcbb7 namespace=k8s.io
Mar 19 11:54:31.642465 containerd[1740]: time="2025-03-19T11:54:31.642392289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:31.692510 sshd[5150]: Accepted publickey for core from 10.200.16.10 port 52640 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:31.693896 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:31.698788 systemd-logind[1713]: New session 27 of user core.
Mar 19 11:54:31.707893 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 19 11:54:31.800727 kubelet[3365]: I0319 11:54:31.800664 3365 setters.go:600] "Node became not ready" node="ci-4230.1.0-a-361b280840" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:54:31Z","lastTransitionTime":"2025-03-19T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:54:32.036741 sshd[5257]: Connection closed by 10.200.16.10 port 52640
Mar 19 11:54:32.037314 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:32.040217 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:52640.service: Deactivated successfully.
Mar 19 11:54:32.042962 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 11:54:32.044662 systemd-logind[1713]: Session 27 logged out. Waiting for processes to exit.
Mar 19 11:54:32.046157 systemd-logind[1713]: Removed session 27.
Mar 19 11:54:32.056020 containerd[1740]: time="2025-03-19T11:54:32.055882339Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:54:32.105567 containerd[1740]: time="2025-03-19T11:54:32.105516732Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db\""
Mar 19 11:54:32.106429 containerd[1740]: time="2025-03-19T11:54:32.106185733Z" level=info msg="StartContainer for \"ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db\""
Mar 19 11:54:32.129667 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:52650.service - OpenSSH per-connection server daemon (10.200.16.10:52650).
Mar 19 11:54:32.132580 systemd[1]: Started cri-containerd-ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db.scope - libcontainer container ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db.
Mar 19 11:54:32.164238 containerd[1740]: time="2025-03-19T11:54:32.164180618Z" level=info msg="StartContainer for \"ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db\" returns successfully"
Mar 19 11:54:32.168496 systemd[1]: cri-containerd-ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db.scope: Deactivated successfully.
Mar 19 11:54:32.209194 containerd[1740]: time="2025-03-19T11:54:32.209099605Z" level=info msg="shim disconnected" id=ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db namespace=k8s.io
Mar 19 11:54:32.209194 containerd[1740]: time="2025-03-19T11:54:32.209172685Z" level=warning msg="cleaning up after shim disconnected" id=ab6d6c07fbee2fdabb7a48bb63464b660ea80a4ffee80892f56cb7512f5834db namespace=k8s.io
Mar 19 11:54:32.209194 containerd[1740]: time="2025-03-19T11:54:32.209182565Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:32.620512 sshd[5283]: Accepted publickey for core from 10.200.16.10 port 52650 ssh2: RSA SHA256:d+xrPHylaDXCx3B+Zw4uQo0w7XPShM/0kG0JbHK/coM
Mar 19 11:54:32.621860 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:54:32.626251 systemd-logind[1713]: New session 28 of user core.
Mar 19 11:54:32.636091 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 19 11:54:32.803269 kubelet[3365]: E0319 11:54:32.803229 3365 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:54:33.059508 containerd[1740]: time="2025-03-19T11:54:33.059277778Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:54:33.103029 containerd[1740]: time="2025-03-19T11:54:33.102973723Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5\""
Mar 19 11:54:33.103758 containerd[1740]: time="2025-03-19T11:54:33.103671644Z" level=info msg="StartContainer for \"dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5\""
Mar 19 11:54:33.136930 systemd[1]: Started cri-containerd-dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5.scope - libcontainer container dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5.
Mar 19 11:54:33.166389 systemd[1]: cri-containerd-dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5.scope: Deactivated successfully.
Mar 19 11:54:33.168917 containerd[1740]: time="2025-03-19T11:54:33.168410219Z" level=info msg="StartContainer for \"dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5\" returns successfully"
Mar 19 11:54:33.205430 containerd[1740]: time="2025-03-19T11:54:33.205245714Z" level=info msg="shim disconnected" id=dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5 namespace=k8s.io
Mar 19 11:54:33.205430 containerd[1740]: time="2025-03-19T11:54:33.205298554Z" level=warning msg="cleaning up after shim disconnected" id=dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5 namespace=k8s.io
Mar 19 11:54:33.205430 containerd[1740]: time="2025-03-19T11:54:33.205308234Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:33.224446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dba8b6fe33d41ca6f94b443402c500e81270f47627b4644018d1163acae98cf5-rootfs.mount: Deactivated successfully.
Mar 19 11:54:34.063207 containerd[1740]: time="2025-03-19T11:54:34.063039938Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:54:34.103229 containerd[1740]: time="2025-03-19T11:54:34.103155118Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16\""
Mar 19 11:54:34.104742 containerd[1740]: time="2025-03-19T11:54:34.104491040Z" level=info msg="StartContainer for \"fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16\""
Mar 19 11:54:34.128839 systemd[1]: run-containerd-runc-k8s.io-fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16-runc.RFp0T7.mount: Deactivated successfully.
Mar 19 11:54:34.135926 systemd[1]: Started cri-containerd-fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16.scope - libcontainer container fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16.
Mar 19 11:54:34.159426 systemd[1]: cri-containerd-fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16.scope: Deactivated successfully.
Mar 19 11:54:34.166882 containerd[1740]: time="2025-03-19T11:54:34.166814891Z" level=info msg="StartContainer for \"fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16\" returns successfully"
Mar 19 11:54:34.196978 containerd[1740]: time="2025-03-19T11:54:34.196854296Z" level=info msg="shim disconnected" id=fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16 namespace=k8s.io
Mar 19 11:54:34.196978 containerd[1740]: time="2025-03-19T11:54:34.196916296Z" level=warning msg="cleaning up after shim disconnected" id=fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16 namespace=k8s.io
Mar 19 11:54:34.196978 containerd[1740]: time="2025-03-19T11:54:34.196924376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:54:34.224658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa803451b18d4b5b71ea9656bf73fc8a4e4f4de5021cfdf3949a39e0bd041d16-rootfs.mount: Deactivated successfully.
Mar 19 11:54:35.068997 containerd[1740]: time="2025-03-19T11:54:35.068692188Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:54:35.120772 containerd[1740]: time="2025-03-19T11:54:35.120700065Z" level=info msg="CreateContainer within sandbox \"b56d2de153809d3f78a913aa184b7cbc2934f242a84e25068a317ef0e4f5ee08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396\""
Mar 19 11:54:35.122009 containerd[1740]: time="2025-03-19T11:54:35.121955587Z" level=info msg="StartContainer for \"0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396\""
Mar 19 11:54:35.154967 systemd[1]: Started cri-containerd-0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396.scope - libcontainer container 0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396.
Mar 19 11:54:35.190108 containerd[1740]: time="2025-03-19T11:54:35.189979049Z" level=info msg="StartContainer for \"0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396\" returns successfully"
Mar 19 11:54:35.601931 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:54:37.068006 systemd[1]: run-containerd-runc-k8s.io-0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396-runc.9EVbc0.mount: Deactivated successfully.
Mar 19 11:54:38.416118 systemd-networkd[1342]: lxc_health: Link UP
Mar 19 11:54:38.420020 systemd-networkd[1342]: lxc_health: Gained carrier
Mar 19 11:54:39.206404 systemd[1]: run-containerd-runc-k8s.io-0c8b2ce44def3672ef15ab841a44ca8535398a4eaf99ea494f68386fd0ae7396-runc.gqcTBo.mount: Deactivated successfully.
Mar 19 11:54:39.397550 kubelet[3365]: I0319 11:54:39.397467 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d6xhh" podStartSLOduration=8.397449847 podStartE2EDuration="8.397449847s" podCreationTimestamp="2025-03-19 11:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:54:36.09554548 +0000 UTC m=+168.525525507" watchObservedRunningTime="2025-03-19 11:54:39.397449847 +0000 UTC m=+171.827429874"
Mar 19 11:54:40.086968 systemd-networkd[1342]: lxc_health: Gained IPv6LL
Mar 19 11:54:43.665609 sshd[5327]: Connection closed by 10.200.16.10 port 52650
Mar 19 11:54:43.665151 sshd-session[5283]: pam_unix(sshd:session): session closed for user core
Mar 19 11:54:43.668796 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:52650.service: Deactivated successfully.
Mar 19 11:54:43.670477 systemd[1]: session-28.scope: Deactivated successfully.
Mar 19 11:54:43.671199 systemd-logind[1713]: Session 28 logged out. Waiting for processes to exit.
Mar 19 11:54:43.672502 systemd-logind[1713]: Removed session 28.
Mar 19 11:54:47.690092 containerd[1740]: time="2025-03-19T11:54:47.689980244Z" level=info msg="StopPodSandbox for \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\""
Mar 19 11:54:47.690092 containerd[1740]: time="2025-03-19T11:54:47.690070564Z" level=info msg="TearDown network for sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" successfully"
Mar 19 11:54:47.690092 containerd[1740]: time="2025-03-19T11:54:47.690080404Z" level=info msg="StopPodSandbox for \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" returns successfully"
Mar 19 11:54:47.691009 containerd[1740]: time="2025-03-19T11:54:47.690982926Z" level=info msg="RemovePodSandbox for \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\""
Mar 19 11:54:47.691093 containerd[1740]: time="2025-03-19T11:54:47.691014366Z" level=info msg="Forcibly stopping sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\""
Mar 19 11:54:47.691093 containerd[1740]: time="2025-03-19T11:54:47.691059966Z" level=info msg="TearDown network for sandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" successfully"
Mar 19 11:54:47.698975 containerd[1740]: time="2025-03-19T11:54:47.698808897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:54:47.698975 containerd[1740]: time="2025-03-19T11:54:47.698864697Z" level=info msg="RemovePodSandbox \"cad24cad30219bb0a711b3f56930df8fa667bd9a56a72c89bc76e8bc0480251e\" returns successfully"
Mar 19 11:54:47.699570 containerd[1740]: time="2025-03-19T11:54:47.699541818Z" level=info msg="StopPodSandbox for \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\""
Mar 19 11:54:47.699663 containerd[1740]: time="2025-03-19T11:54:47.699631938Z" level=info msg="TearDown network for sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" successfully"
Mar 19 11:54:47.699663 containerd[1740]: time="2025-03-19T11:54:47.699646778Z" level=info msg="StopPodSandbox for \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" returns successfully"
Mar 19 11:54:47.701384 containerd[1740]: time="2025-03-19T11:54:47.699962339Z" level=info msg="RemovePodSandbox for \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\""
Mar 19 11:54:47.701384 containerd[1740]: time="2025-03-19T11:54:47.700211619Z" level=info msg="Forcibly stopping sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\""
Mar 19 11:54:47.701384 containerd[1740]: time="2025-03-19T11:54:47.700264779Z" level=info msg="TearDown network for sandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" successfully"
Mar 19 11:54:47.708608 containerd[1740]: time="2025-03-19T11:54:47.708550391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:54:47.708741 containerd[1740]: time="2025-03-19T11:54:47.708617672Z" level=info msg="RemovePodSandbox \"e62735f811097882ee6c07968e424cab858d0ac3f609c4d56e6fa3f02b459b3e\" returns successfully"