Mar 19 11:35:50.329904 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 19 11:35:50.329927 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:35:50.329935 kernel: KASLR enabled
Mar 19 11:35:50.329941 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 19 11:35:50.329948 kernel: printk: bootconsole [pl11] enabled
Mar 19 11:35:50.329954 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:35:50.329960 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3e9dc698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Mar 19 11:35:50.329966 kernel: random: crng init done
Mar 19 11:35:50.329972 kernel: secureboot: Secure boot disabled
Mar 19 11:35:50.329978 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:35:50.329983 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 19 11:35:50.329989 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.329995 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330002 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 19 11:35:50.330009 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330016 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330022 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330029 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330036 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330042 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330048 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 19 11:35:50.330054 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 19 11:35:50.330060 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 19 11:35:50.330066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 19 11:35:50.330072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 19 11:35:50.330078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 19 11:35:50.330084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 19 11:35:50.330090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 19 11:35:50.330098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 19 11:35:50.330104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 19 11:35:50.330110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 19 11:35:50.330116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 19 11:35:50.330122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 19 11:35:50.330129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 19 11:35:50.330135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 19 11:35:50.330141 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 19 11:35:50.330147 kernel: Zone ranges:
Mar 19 11:35:50.330153 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 19 11:35:50.330158 kernel: DMA32 empty
Mar 19 11:35:50.330257 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 19 11:35:50.330269 kernel: Movable zone start for each node
Mar 19 11:35:50.330275 kernel: Early memory node ranges
Mar 19 11:35:50.330281 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 19 11:35:50.330288 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 19 11:35:50.330294 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 19 11:35:50.330302 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 19 11:35:50.330308 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 19 11:35:50.330315 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 19 11:35:50.330321 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 19 11:35:50.330327 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 19 11:35:50.330334 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 19 11:35:50.330340 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 19 11:35:50.330347 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 19 11:35:50.330353 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:35:50.330359 kernel: psci: PSCIv1.1 detected in firmware.
Mar 19 11:35:50.330366 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:35:50.330372 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 19 11:35:50.330380 kernel: psci: SMC Calling Convention v1.4
Mar 19 11:35:50.330387 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 19 11:35:50.330393 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 19 11:35:50.330400 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:35:50.330406 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:35:50.330412 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 19 11:35:50.330419 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:35:50.330425 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:35:50.330432 kernel: CPU features: detected: Hardware dirty bit management
Mar 19 11:35:50.330438 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:35:50.330444 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 19 11:35:50.330453 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 19 11:35:50.330459 kernel: CPU features: detected: ARM erratum 1418040
Mar 19 11:35:50.330465 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 19 11:35:50.330472 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 19 11:35:50.330478 kernel: alternatives: applying boot alternatives
Mar 19 11:35:50.330486 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:35:50.330493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:35:50.330500 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:35:50.330506 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:35:50.330512 kernel: Fallback order for Node 0: 0
Mar 19 11:35:50.330519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 19 11:35:50.330527 kernel: Policy zone: Normal
Mar 19 11:35:50.330533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:35:50.330540 kernel: software IO TLB: area num 2.
Mar 19 11:35:50.330546 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Mar 19 11:35:50.330553 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Mar 19 11:35:50.330559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 19 11:35:50.330566 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:35:50.330573 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:35:50.330579 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 19 11:35:50.330586 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:35:50.330592 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:35:50.330601 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:35:50.330607 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 19 11:35:50.330613 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:35:50.330620 kernel: GICv3: 960 SPIs implemented
Mar 19 11:35:50.330626 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:35:50.330632 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:35:50.330639 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 19 11:35:50.330645 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 19 11:35:50.330651 kernel: ITS: No ITS available, not enabling LPIs
Mar 19 11:35:50.330658 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:35:50.330664 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:35:50.330671 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 19 11:35:50.330679 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 19 11:35:50.330685 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 19 11:35:50.330692 kernel: Console: colour dummy device 80x25
Mar 19 11:35:50.330698 kernel: printk: console [tty1] enabled
Mar 19 11:35:50.330705 kernel: ACPI: Core revision 20230628
Mar 19 11:35:50.330712 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 19 11:35:50.330719 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:35:50.330725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:35:50.330732 kernel: landlock: Up and running.
Mar 19 11:35:50.330740 kernel: SELinux: Initializing.
Mar 19 11:35:50.330747 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:35:50.330754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:35:50.330761 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:35:50.330767 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:35:50.330774 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 19 11:35:50.330781 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 19 11:35:50.330795 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 19 11:35:50.330801 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:35:50.330808 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:35:50.330815 kernel: Remapping and enabling EFI services.
Mar 19 11:35:50.330822 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:35:50.330830 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:35:50.330837 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 19 11:35:50.330844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:35:50.330851 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 19 11:35:50.330858 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 11:35:50.330867 kernel: SMP: Total of 2 processors activated.
Mar 19 11:35:50.330874 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:35:50.330881 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 19 11:35:50.330888 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 19 11:35:50.330895 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:35:50.330902 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 19 11:35:50.330909 kernel: CPU features: detected: LSE atomic instructions
Mar 19 11:35:50.330916 kernel: CPU features: detected: Privileged Access Never
Mar 19 11:35:50.330923 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:35:50.330931 kernel: alternatives: applying system-wide alternatives
Mar 19 11:35:50.330938 kernel: devtmpfs: initialized
Mar 19 11:35:50.330945 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:35:50.330952 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 19 11:35:50.330959 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:35:50.330966 kernel: SMBIOS 3.1.0 present.
Mar 19 11:35:50.330973 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 19 11:35:50.330980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:35:50.330988 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:35:50.330996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:35:50.331003 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:35:50.331010 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:35:50.331017 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 19 11:35:50.331024 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:35:50.331031 kernel: cpuidle: using governor menu
Mar 19 11:35:50.331038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:35:50.331046 kernel: ASID allocator initialised with 32768 entries
Mar 19 11:35:50.331053 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:35:50.331061 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:35:50.331068 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 19 11:35:50.331075 kernel: Modules: 0 pages in range for non-PLT usage
Mar 19 11:35:50.331082 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:35:50.331089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:35:50.331096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:35:50.331103 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:35:50.331110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:35:50.331117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:35:50.331126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:35:50.331133 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:35:50.331139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:35:50.331147 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:35:50.331153 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:35:50.331167 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:35:50.331175 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:35:50.331182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:35:50.331189 kernel: ACPI: Interpreter enabled
Mar 19 11:35:50.331197 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:35:50.331204 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 19 11:35:50.331212 kernel: printk: console [ttyAMA0] enabled
Mar 19 11:35:50.331218 kernel: printk: bootconsole [pl11] disabled
Mar 19 11:35:50.331226 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 19 11:35:50.331233 kernel: iommu: Default domain type: Translated
Mar 19 11:35:50.331239 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:35:50.331246 kernel: efivars: Registered efivars operations
Mar 19 11:35:50.331253 kernel: vgaarb: loaded
Mar 19 11:35:50.331262 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:35:50.331269 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:35:50.331276 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:35:50.331283 kernel: pnp: PnP ACPI init
Mar 19 11:35:50.331290 kernel: pnp: PnP ACPI: found 0 devices
Mar 19 11:35:50.331297 kernel: NET: Registered PF_INET protocol family
Mar 19 11:35:50.331304 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:35:50.331311 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:35:50.331318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:35:50.331327 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:35:50.331335 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:35:50.331342 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:35:50.331349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:35:50.331356 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:35:50.331363 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:35:50.331370 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:35:50.331377 kernel: kvm [1]: HYP mode not available
Mar 19 11:35:50.331384 kernel: Initialise system trusted keyrings
Mar 19 11:35:50.331393 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:35:50.331400 kernel: Key type asymmetric registered
Mar 19 11:35:50.331407 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:35:50.331414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:35:50.331421 kernel: io scheduler mq-deadline registered
Mar 19 11:35:50.331428 kernel: io scheduler kyber registered
Mar 19 11:35:50.331434 kernel: io scheduler bfq registered
Mar 19 11:35:50.331441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:35:50.331448 kernel: thunder_xcv, ver 1.0
Mar 19 11:35:50.331457 kernel: thunder_bgx, ver 1.0
Mar 19 11:35:50.331464 kernel: nicpf, ver 1.0
Mar 19 11:35:50.331471 kernel: nicvf, ver 1.0
Mar 19 11:35:50.331623 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:35:50.331694 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:35:49 UTC (1742384149)
Mar 19 11:35:50.331704 kernel: efifb: probing for efifb
Mar 19 11:35:50.331711 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 19 11:35:50.331718 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 19 11:35:50.331727 kernel: efifb: scrolling: redraw
Mar 19 11:35:50.331734 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 19 11:35:50.331741 kernel: Console: switching to colour frame buffer device 128x48
Mar 19 11:35:50.331748 kernel: fb0: EFI VGA frame buffer device
Mar 19 11:35:50.331755 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 19 11:35:50.331762 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:35:50.331769 kernel: No ACPI PMU IRQ for CPU0
Mar 19 11:35:50.331776 kernel: No ACPI PMU IRQ for CPU1
Mar 19 11:35:50.331783 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 19 11:35:50.331792 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:35:50.331799 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:35:50.331806 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:35:50.331813 kernel: Segment Routing with IPv6
Mar 19 11:35:50.331820 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:35:50.331827 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:35:50.331834 kernel: Key type dns_resolver registered
Mar 19 11:35:50.331841 kernel: registered taskstats version 1
Mar 19 11:35:50.331848 kernel: Loading compiled-in X.509 certificates
Mar 19 11:35:50.331858 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:35:50.331865 kernel: Key type .fscrypt registered
Mar 19 11:35:50.331872 kernel: Key type fscrypt-provisioning registered
Mar 19 11:35:50.331879 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:35:50.331886 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:35:50.331893 kernel: ima: No architecture policies found
Mar 19 11:35:50.331900 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:35:50.331907 kernel: clk: Disabling unused clocks
Mar 19 11:35:50.331914 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:35:50.331922 kernel: Run /init as init process
Mar 19 11:35:50.331929 kernel: with arguments:
Mar 19 11:35:50.331936 kernel: /init
Mar 19 11:35:50.331943 kernel: with environment:
Mar 19 11:35:50.331950 kernel: HOME=/
Mar 19 11:35:50.331957 kernel: TERM=linux
Mar 19 11:35:50.331964 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:35:50.331972 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:35:50.331983 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:35:50.331991 systemd[1]: Detected virtualization microsoft.
Mar 19 11:35:50.331999 systemd[1]: Detected architecture arm64.
Mar 19 11:35:50.332006 systemd[1]: Running in initrd.
Mar 19 11:35:50.332013 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:35:50.332021 systemd[1]: Hostname set to .
Mar 19 11:35:50.332029 systemd[1]: Initializing machine ID from random generator.
Mar 19 11:35:50.332036 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:35:50.332046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:35:50.332053 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:35:50.332061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:35:50.332069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:35:50.332077 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:35:50.332085 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:35:50.332093 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:35:50.332103 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:35:50.332111 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:35:50.332119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:35:50.332126 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:35:50.332134 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:35:50.332141 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:35:50.332149 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:35:50.332156 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:35:50.332178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:35:50.332185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:35:50.332193 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:35:50.332200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:35:50.332208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:35:50.332216 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:35:50.332223 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:35:50.332231 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:35:50.332239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:35:50.332248 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:35:50.332255 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:35:50.332263 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:35:50.332271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:35:50.332297 systemd-journald[218]: Collecting audit messages is disabled.
Mar 19 11:35:50.332319 systemd-journald[218]: Journal started
Mar 19 11:35:50.332337 systemd-journald[218]: Runtime Journal (/run/log/journal/6d4b547252cb4d4199cbf80350759ade) is 8M, max 78.5M, 70.5M free.
Mar 19 11:35:50.338629 systemd-modules-load[220]: Inserted module 'overlay'
Mar 19 11:35:50.364267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:35:50.371788 systemd-modules-load[220]: Inserted module 'br_netfilter'
Mar 19 11:35:50.387070 kernel: Bridge firewalling registered
Mar 19 11:35:50.387112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:35:50.406535 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:35:50.408231 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:35:50.414883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:35:50.427308 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:35:50.441155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:35:50.448704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:35:50.472432 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:35:50.488622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:35:50.507387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:35:50.528681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:35:50.538683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:35:50.555424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:35:50.576869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:35:50.584361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:35:50.611717 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:35:50.624379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:35:50.633395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:35:50.658670 dracut-cmdline[252]: dracut-dracut-053
Mar 19 11:35:50.665430 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:35:50.670013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:35:50.733021 systemd-resolved[255]: Positive Trust Anchors:
Mar 19 11:35:50.733044 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:35:50.733076 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:35:50.735485 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 19 11:35:50.738057 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:35:50.746336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:35:50.864200 kernel: SCSI subsystem initialized
Mar 19 11:35:50.872204 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:35:50.883189 kernel: iscsi: registered transport (tcp)
Mar 19 11:35:50.901382 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:35:50.901436 kernel: QLogic iSCSI HBA Driver
Mar 19 11:35:50.942279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:35:50.960443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:35:50.996403 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:35:50.996462 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:35:51.004069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:35:51.055211 kernel: raid6: neonx8 gen() 15741 MB/s
Mar 19 11:35:51.075181 kernel: raid6: neonx4 gen() 15802 MB/s
Mar 19 11:35:51.095193 kernel: raid6: neonx2 gen() 13190 MB/s
Mar 19 11:35:51.116181 kernel: raid6: neonx1 gen() 10548 MB/s
Mar 19 11:35:51.136180 kernel: raid6: int64x8 gen() 6788 MB/s
Mar 19 11:35:51.156205 kernel: raid6: int64x4 gen() 7352 MB/s
Mar 19 11:35:51.177209 kernel: raid6: int64x2 gen() 6112 MB/s
Mar 19 11:35:51.200419 kernel: raid6: int64x1 gen() 5059 MB/s
Mar 19 11:35:51.200462 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s
Mar 19 11:35:51.224383 kernel: raid6: .... xor() 12480 MB/s, rmw enabled
Mar 19 11:35:51.224402 kernel: raid6: using neon recovery algorithm
Mar 19 11:35:51.236148 kernel: xor: measuring software checksum speed
Mar 19 11:35:51.236205 kernel: 8regs : 21584 MB/sec
Mar 19 11:35:51.240796 kernel: 32regs : 21653 MB/sec
Mar 19 11:35:51.244664 kernel: arm64_neon : 27889 MB/sec
Mar 19 11:35:51.249067 kernel: xor: using function: arm64_neon (27889 MB/sec)
Mar 19 11:35:51.299190 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:35:51.310957 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:35:51.326377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:35:51.353026 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 19 11:35:51.359326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:35:51.381416 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 19 11:35:51.408209 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation
Mar 19 11:35:51.447311 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 11:35:51.471481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:35:51.514852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:35:51.535360 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 19 11:35:51.565611 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 19 11:35:51.581792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 19 11:35:51.598043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:35:51.614803 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:35:51.632933 kernel: hv_vmbus: Vmbus version:5.3
Mar 19 11:35:51.634590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 19 11:35:51.667726 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 19 11:35:51.667754 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 19 11:35:51.667764 kernel: hv_vmbus: registering driver hv_netvsc
Mar 19 11:35:51.668091 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 19 11:35:51.689217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:35:51.718330 kernel: hv_vmbus: registering driver hv_storvsc
Mar 19 11:35:51.718353 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 19 11:35:51.718363 kernel: PTP clock support registered
Mar 19 11:35:51.692389 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:35:51.718830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:35:51.808555 kernel: scsi host1: storvsc_host_t Mar 19 11:35:51.808828 kernel: scsi host0: storvsc_host_t Mar 19 11:35:51.808924 kernel: hv_vmbus: registering driver hid_hyperv Mar 19 11:35:51.808934 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 19 11:35:51.808944 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 19 11:35:51.808957 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 19 11:35:51.808978 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 19 11:35:51.809064 kernel: hv_netvsc 002248b7-f795-0022-48b7-f795002248b7 eth0: VF slot 1 added Mar 19 11:35:51.809202 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 19 11:35:51.734761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:35:51.734952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:51.787813 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:51.823496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:51.841729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:35:51.841819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:51.856778 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:35:51.871550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 19 11:35:51.907880 kernel: hv_utils: Registering HyperV Utility Driver Mar 19 11:35:51.907955 kernel: hv_vmbus: registering driver hv_pci Mar 19 11:35:51.907968 kernel: hv_pci ef4c2f90-27c2-4aca-93d2-2a757c5ba3bd: PCI VMBus probing: Using version 0x10004 Mar 19 11:35:51.841722 kernel: hv_vmbus: registering driver hv_utils Mar 19 11:35:51.856415 kernel: hv_pci ef4c2f90-27c2-4aca-93d2-2a757c5ba3bd: PCI host bridge to bus 27c2:00 Mar 19 11:35:51.856567 kernel: pci_bus 27c2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 19 11:35:51.856666 kernel: hv_utils: Heartbeat IC version 3.0 Mar 19 11:35:51.856674 kernel: pci_bus 27c2:00: No busn resource found for root bus, will use [bus 00-ff] Mar 19 11:35:51.856751 kernel: hv_utils: Shutdown IC version 3.2 Mar 19 11:35:51.856761 kernel: hv_utils: TimeSync IC version 4.0 Mar 19 11:35:51.856768 kernel: pci 27c2:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 19 11:35:51.856881 kernel: pci 27c2:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 19 11:35:51.856974 kernel: pci 27c2:00:02.0: enabling Extended Tags Mar 19 11:35:51.857064 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 19 11:35:51.857165 kernel: pci 27c2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 27c2:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 19 11:35:51.858726 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 19 11:35:51.858740 kernel: pci_bus 27c2:00: busn_res: [bus 00-ff] end is updated to 00 Mar 19 11:35:51.858863 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 19 11:35:51.858959 kernel: pci 27c2:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 19 11:35:51.859058 systemd-journald[218]: Time jumped backwards, rotating. Mar 19 11:35:51.919567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 19 11:35:51.880456 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 19 11:35:51.920096 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 19 11:35:51.920230 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 19 11:35:51.920345 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 19 11:35:51.920432 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 19 11:35:51.920523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:51.920533 kernel: mlx5_core 27c2:00:02.0: enabling device (0000 -> 0002) Mar 19 11:35:52.143343 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 19 11:35:52.143492 kernel: mlx5_core 27c2:00:02.0: firmware version: 16.30.1284 Mar 19 11:35:52.143597 kernel: hv_netvsc 002248b7-f795-0022-48b7-f795002248b7 eth0: VF registering: eth1 Mar 19 11:35:52.143696 kernel: mlx5_core 27c2:00:02.0 eth1: joined to eth0 Mar 19 11:35:52.143800 kernel: mlx5_core 27c2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 19 11:35:51.770394 systemd-resolved[255]: Clock change detected. Flushing caches. Mar 19 11:35:51.779165 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:35:51.881837 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:35:52.174293 kernel: mlx5_core 27c2:00:02.0 enP10178s1: renamed from eth1 Mar 19 11:35:52.753629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 19 11:35:52.906269 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (490) Mar 19 11:35:52.924783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 19 11:35:52.980536 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 19 11:35:53.118616 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (498) Mar 19 11:35:53.135622 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 19 11:35:53.142486 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 19 11:35:53.175510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:35:53.202327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:53.210256 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:54.219748 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:54.219797 disk-uuid[611]: The operation has completed successfully. Mar 19 11:35:54.299094 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:35:54.299198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:35:54.341421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:35:54.354787 sh[697]: Success Mar 19 11:35:54.393292 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:35:54.743135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:35:54.752425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:35:54.773293 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 19 11:35:54.800934 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:35:54.801002 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:54.807790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:35:54.813049 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:35:54.817463 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:35:55.447588 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:35:55.454199 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:35:55.482759 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:35:55.493770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:35:55.539329 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:55.539415 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:55.544562 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:35:55.561310 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:35:55.570072 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:35:55.584260 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:55.592671 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:35:55.612944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:35:55.658697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:35:55.678464 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 19 11:35:55.711513 systemd-networkd[882]: lo: Link UP Mar 19 11:35:55.711526 systemd-networkd[882]: lo: Gained carrier Mar 19 11:35:55.713226 systemd-networkd[882]: Enumeration completed Mar 19 11:35:55.716855 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:35:55.717154 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:55.717158 systemd-networkd[882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:35:55.724102 systemd[1]: Reached target network.target - Network. Mar 19 11:35:55.815270 kernel: mlx5_core 27c2:00:02.0 enP10178s1: Link up Mar 19 11:35:55.864427 kernel: hv_netvsc 002248b7-f795-0022-48b7-f795002248b7 eth0: Data path switched to VF: enP10178s1 Mar 19 11:35:55.864801 systemd-networkd[882]: enP10178s1: Link UP Mar 19 11:35:55.864886 systemd-networkd[882]: eth0: Link UP Mar 19 11:35:55.865032 systemd-networkd[882]: eth0: Gained carrier Mar 19 11:35:55.865041 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:55.892046 systemd-networkd[882]: enP10178s1: Gained carrier Mar 19 11:35:55.907307 systemd-networkd[882]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 19 11:35:56.962138 ignition[831]: Ignition 2.20.0 Mar 19 11:35:56.966280 ignition[831]: Stage: fetch-offline Mar 19 11:35:56.966352 ignition[831]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:56.971554 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 19 11:35:56.966361 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:56.966485 ignition[831]: parsed url from cmdline: "" Mar 19 11:35:56.966489 ignition[831]: no config URL provided Mar 19 11:35:56.966494 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:35:56.966501 ignition[831]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:35:57.002578 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 19 11:35:56.966507 ignition[831]: failed to fetch config: resource requires networking Mar 19 11:35:56.966718 ignition[831]: Ignition finished successfully Mar 19 11:35:57.036260 ignition[892]: Ignition 2.20.0 Mar 19 11:35:57.036272 ignition[892]: Stage: fetch Mar 19 11:35:57.036445 ignition[892]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:57.036454 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:57.036553 ignition[892]: parsed url from cmdline: "" Mar 19 11:35:57.036557 ignition[892]: no config URL provided Mar 19 11:35:57.036563 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:35:57.036571 ignition[892]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:35:57.036597 ignition[892]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 19 11:35:57.137312 ignition[892]: GET result: OK Mar 19 11:35:57.137416 ignition[892]: config has been read from IMDS userdata Mar 19 11:35:57.137458 ignition[892]: parsing config with SHA512: 4a3444a080552343f1ee3a89224805b0e9396a10736d7006041d9eaed1c443f8c53de0c027812a5548df409b8b4187e1e2691136cdb729b70b8823e4471ecbf5 Mar 19 11:35:57.141671 unknown[892]: fetched base config from "system" Mar 19 11:35:57.142039 ignition[892]: fetch: fetch complete Mar 19 11:35:57.141678 unknown[892]: fetched base config from "system" Mar 19 11:35:57.142044 ignition[892]: fetch: fetch passed Mar 19 11:35:57.141683 unknown[892]: 
fetched user config from "azure" Mar 19 11:35:57.142082 ignition[892]: Ignition finished successfully Mar 19 11:35:57.148025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:35:57.173031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:35:57.194387 ignition[898]: Ignition 2.20.0 Mar 19 11:35:57.199072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:35:57.194394 ignition[898]: Stage: kargs Mar 19 11:35:57.219422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:35:57.194580 ignition[898]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:57.194589 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:57.195576 ignition[898]: kargs: kargs passed Mar 19 11:35:57.252274 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:35:57.195628 ignition[898]: Ignition finished successfully Mar 19 11:35:57.264178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:35:57.244187 ignition[904]: Ignition 2.20.0 Mar 19 11:35:57.276763 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:35:57.244194 ignition[904]: Stage: disks Mar 19 11:35:57.289490 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:35:57.244449 ignition[904]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:57.303301 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:35:57.244459 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:57.313397 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:35:57.245386 ignition[904]: disks: disks passed Mar 19 11:35:57.352507 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 19 11:35:57.245435 ignition[904]: Ignition finished successfully Mar 19 11:35:57.447975 systemd-fsck[913]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 19 11:35:57.457304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:35:57.479449 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:35:57.545315 kernel: EXT4-fs (sda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:35:57.545719 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:35:57.554989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:35:57.618350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:35:57.627566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:35:57.638432 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 19 11:35:57.664774 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:35:57.696823 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (924) Mar 19 11:35:57.696848 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:57.696869 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:57.664826 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:35:57.730427 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:35:57.730454 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:35:57.673364 systemd-networkd[882]: enP10178s1: Gained IPv6LL Mar 19 11:35:57.675290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:35:57.731515 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 19 11:35:57.741767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:35:57.857372 systemd-networkd[882]: eth0: Gained IPv6LL Mar 19 11:35:58.477441 coreos-metadata[926]: Mar 19 11:35:58.477 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 19 11:35:58.489393 coreos-metadata[926]: Mar 19 11:35:58.489 INFO Fetch successful Mar 19 11:35:58.495090 coreos-metadata[926]: Mar 19 11:35:58.490 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 19 11:35:58.505999 coreos-metadata[926]: Mar 19 11:35:58.501 INFO Fetch successful Mar 19 11:35:58.529303 coreos-metadata[926]: Mar 19 11:35:58.529 INFO wrote hostname ci-4230.1.0-a-2247daed6b to /sysroot/etc/hostname Mar 19 11:35:58.538616 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:35:59.042384 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:35:59.130611 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:35:59.143902 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:35:59.154078 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:36:00.503288 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:36:00.523493 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:36:00.538604 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:36:00.559863 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:36:00.556795 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:36:00.585372 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 19 11:36:00.594369 ignition[1048]: INFO : Ignition 2.20.0 Mar 19 11:36:00.594369 ignition[1048]: INFO : Stage: mount Mar 19 11:36:00.594369 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:36:00.594369 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:36:00.594369 ignition[1048]: INFO : mount: mount passed Mar 19 11:36:00.594369 ignition[1048]: INFO : Ignition finished successfully Mar 19 11:36:00.602239 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:36:00.619532 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:36:00.640492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:36:00.679268 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1060) Mar 19 11:36:00.695478 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:36:00.695547 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:36:00.701024 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:36:00.709278 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:36:00.711660 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:36:00.748289 ignition[1077]: INFO : Ignition 2.20.0 Mar 19 11:36:00.748289 ignition[1077]: INFO : Stage: files Mar 19 11:36:00.757802 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:36:00.757802 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:36:00.757802 ignition[1077]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:36:00.781909 ignition[1077]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:36:00.789761 ignition[1077]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:36:00.906869 ignition[1077]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:36:00.914378 ignition[1077]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:36:00.914378 ignition[1077]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:36:00.907298 unknown[1077]: wrote ssh authorized keys file for user: core Mar 19 11:36:00.939661 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:36:00.950143 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 19 11:36:00.981968 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:36:01.083925 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:36:01.083925 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:36:01.105635 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 19 11:36:01.552751 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:36:01.625293 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:36:01.722550 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 19 11:36:02.024933 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:36:02.229027 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:36:02.229027 ignition[1077]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 19 11:36:02.250442 ignition[1077]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:36:02.250442 ignition[1077]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:36:02.250442 ignition[1077]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 19 11:36:02.250442 ignition[1077]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:36:02.250442 ignition[1077]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:36:02.308190 ignition[1077]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" 
Mar 19 11:36:02.308190 ignition[1077]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:36:02.308190 ignition[1077]: INFO : files: files passed Mar 19 11:36:02.308190 ignition[1077]: INFO : Ignition finished successfully Mar 19 11:36:02.276494 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:36:02.318569 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:36:02.337504 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:36:02.352378 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:36:02.384658 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:36:02.384658 initrd-setup-root-after-ignition[1106]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:36:02.352495 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:36:02.424833 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:36:02.385172 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:36:02.401458 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:36:02.449485 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:36:02.483499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:36:02.483620 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:36:02.496063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:36:02.508330 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Mar 19 11:36:02.519977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:36:02.534512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:36:02.558632 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:36:02.574547 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:36:02.596339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:36:02.596474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:36:02.608910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:36:02.621518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:36:02.634119 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:36:02.645161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:36:02.645259 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:36:02.668830 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:36:02.682912 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:36:02.694584 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:36:02.706605 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:36:02.720555 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:36:02.735071 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:36:02.749227 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:36:02.762572 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:36:02.776406 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Mar 19 11:36:02.789572 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:36:02.800451 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:36:02.800542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:36:02.815973 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:36:02.822006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:36:02.834715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:36:02.840272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:36:02.848420 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:36:02.848509 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:36:02.866573 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:36:02.866644 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:36:02.882081 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:36:02.882141 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:36:02.895021 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 19 11:36:02.895111 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:36:02.938472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:36:02.954221 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:36:02.954345 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 19 11:36:02.992668 ignition[1131]: INFO : Ignition 2.20.0 Mar 19 11:36:02.992668 ignition[1131]: INFO : Stage: umount Mar 19 11:36:03.027168 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:36:03.027168 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:36:03.027168 ignition[1131]: INFO : umount: umount passed Mar 19 11:36:03.027168 ignition[1131]: INFO : Ignition finished successfully Mar 19 11:36:02.994562 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:36:03.005184 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:36:03.005277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:36:03.019820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:36:03.019899 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:36:03.034193 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:36:03.034821 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:36:03.036195 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:36:03.059905 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:36:03.060004 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:36:03.117765 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:36:03.117839 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:36:03.123825 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 19 11:36:03.123874 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 19 11:36:03.134553 systemd[1]: Stopped target network.target - Network. Mar 19 11:36:03.145340 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:36:03.145438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 19 11:36:03.157408 systemd[1]: Stopped target paths.target - Path Units.
Mar 19 11:36:03.169983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 19 11:36:03.177060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:36:03.185140 systemd[1]: Stopped target slices.target - Slice Units.
Mar 19 11:36:03.191950 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 19 11:36:03.204343 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 19 11:36:03.204408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:36:03.215666 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 19 11:36:03.215715 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:36:03.226183 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 19 11:36:03.226262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 19 11:36:03.237144 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 19 11:36:03.237200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 19 11:36:03.243652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 19 11:36:03.254643 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 19 11:36:03.274315 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 19 11:36:03.274459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 19 11:36:03.546338 kernel: hv_netvsc 002248b7-f795-0022-48b7-f795002248b7 eth0: Data path switched from VF: enP10178s1
Mar 19 11:36:03.284674 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 11:36:03.284728 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 11:36:03.291602 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 19 11:36:03.291729 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 19 11:36:03.319200 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 19 11:36:03.319785 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 19 11:36:03.322052 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 19 11:36:03.332168 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 19 11:36:03.333677 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 19 11:36:03.333742 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:36:03.355452 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 19 11:36:03.375092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 19 11:36:03.375187 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 11:36:03.382821 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:36:03.382877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:36:03.403902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 11:36:03.403959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:36:03.410281 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 11:36:03.410334 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:36:03.426818 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:36:03.436989 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:36:03.437072 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:36:03.471045 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 11:36:03.471198 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:36:03.479648 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 11:36:03.479721 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:36:03.490808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 11:36:03.490858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:36:03.502094 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 11:36:03.502162 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:36:03.529186 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 11:36:03.529283 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:36:03.824224 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 19 11:36:03.546397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:36:03.546474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:36:03.589536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 11:36:03.605400 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 11:36:03.605489 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:36:03.630197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:36:03.630300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:36:03.643745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 11:36:03.643809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:36:03.644149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 11:36:03.644265 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 11:36:03.686991 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 11:36:03.687124 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 11:36:03.699492 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 11:36:03.735493 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 11:36:03.759262 systemd[1]: Switching root.
Mar 19 11:36:03.939326 systemd-journald[218]: Journal stopped
Mar 19 11:36:09.574914 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 11:36:09.574937 kernel: SELinux: policy capability open_perms=1
Mar 19 11:36:09.574948 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 11:36:09.574955 kernel: SELinux: policy capability always_check_network=0
Mar 19 11:36:09.574964 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 11:36:09.574972 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 11:36:09.574980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 11:36:09.574988 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 11:36:09.574996 kernel: audit: type=1403 audit(1742384164.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 11:36:09.575006 systemd[1]: Successfully loaded SELinux policy in 87.430ms.
Mar 19 11:36:09.575017 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.854ms.
Mar 19 11:36:09.575027 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:36:09.575035 systemd[1]: Detected virtualization microsoft.
Mar 19 11:36:09.575044 systemd[1]: Detected architecture arm64.
Mar 19 11:36:09.575053 systemd[1]: Detected first boot.
Mar 19 11:36:09.575063 systemd[1]: Hostname set to .
Mar 19 11:36:09.575072 systemd[1]: Initializing machine ID from random generator.
Mar 19 11:36:09.575081 zram_generator::config[1175]: No configuration found.
Mar 19 11:36:09.575090 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 11:36:09.575098 systemd[1]: Populated /etc with preset unit settings.
Mar 19 11:36:09.575108 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 11:36:09.575117 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 11:36:09.575127 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 11:36:09.575137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:36:09.575146 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 11:36:09.575155 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 11:36:09.575164 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 11:36:09.575173 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 11:36:09.575182 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 11:36:09.575193 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 11:36:09.575202 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 11:36:09.575211 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 11:36:09.575220 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:36:09.575229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:36:09.575251 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 11:36:09.575261 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 11:36:09.575270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 11:36:09.575281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:36:09.575290 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 19 11:36:09.575299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:36:09.575310 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 11:36:09.575319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 11:36:09.575329 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:36:09.575338 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 11:36:09.575348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:36:09.575358 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:36:09.575367 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:36:09.575377 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:36:09.575386 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 11:36:09.575395 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 11:36:09.575404 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 11:36:09.575416 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:36:09.575425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:36:09.575434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:36:09.575443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 11:36:09.575452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 11:36:09.575462 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 11:36:09.575471 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 11:36:09.575481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 11:36:09.575491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 11:36:09.575500 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 11:36:09.575510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 11:36:09.575519 systemd[1]: Reached target machines.target - Containers.
Mar 19 11:36:09.575528 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 11:36:09.575538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:36:09.575548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:36:09.575559 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 11:36:09.575568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:36:09.575578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:36:09.575587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:36:09.575596 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 11:36:09.575605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:36:09.575616 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 11:36:09.575625 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 11:36:09.575636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 11:36:09.575645 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 11:36:09.575654 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 11:36:09.575663 kernel: loop: module loaded
Mar 19 11:36:09.575672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:36:09.575682 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:36:09.575690 kernel: ACPI: bus type drm_connector registered
Mar 19 11:36:09.575699 kernel: fuse: init (API version 7.39)
Mar 19 11:36:09.575707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:36:09.575735 systemd-journald[1279]: Collecting audit messages is disabled.
Mar 19 11:36:09.575756 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 11:36:09.575767 systemd-journald[1279]: Journal started
Mar 19 11:36:09.575788 systemd-journald[1279]: Runtime Journal (/run/log/journal/2582e11f53f84623af08765314670a75) is 8M, max 78.5M, 70.5M free.
Mar 19 11:36:08.463216 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 11:36:08.475164 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 19 11:36:08.475596 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 11:36:08.475952 systemd[1]: systemd-journald.service: Consumed 3.635s CPU time.
Mar 19 11:36:09.607812 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 11:36:09.626053 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 11:36:09.649910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:36:09.660460 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 11:36:09.660515 systemd[1]: Stopped verity-setup.service.
Mar 19 11:36:09.681406 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:36:09.682418 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 11:36:09.689742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 11:36:09.697674 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 11:36:09.704827 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 11:36:09.712327 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 11:36:09.719578 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 11:36:09.727764 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 11:36:09.735445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:36:09.743012 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 11:36:09.743193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 11:36:09.750772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:36:09.750948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:36:09.758458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:36:09.758633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:36:09.765319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:36:09.765491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:36:09.773736 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 11:36:09.773895 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 11:36:09.781647 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:36:09.781821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:36:09.789606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:36:09.796954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 11:36:09.805206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 11:36:09.812988 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 11:36:09.820569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:36:09.840270 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 11:36:09.852339 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 11:36:09.859612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 11:36:09.866113 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 11:36:09.866160 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:36:09.872864 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 11:36:09.880906 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 11:36:09.888422 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 11:36:09.894135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:36:09.937410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 11:36:09.944333 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 11:36:09.951518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:36:09.954480 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 11:36:09.963006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:36:09.973395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:36:09.989228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 11:36:10.003480 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 11:36:10.011886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 11:36:10.023702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 11:36:10.030852 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 11:36:10.039290 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 11:36:10.047036 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 11:36:10.058997 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 11:36:10.071596 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 11:36:10.081523 udevadm[1318]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 19 11:36:10.083432 systemd-journald[1279]: Time spent on flushing to /var/log/journal/2582e11f53f84623af08765314670a75 is 47.105ms for 924 entries.
Mar 19 11:36:10.083432 systemd-journald[1279]: System Journal (/var/log/journal/2582e11f53f84623af08765314670a75) is 11.8M, max 2.6G, 2.6G free.
Mar 19 11:36:10.219328 systemd-journald[1279]: Received client request to flush runtime journal.
Mar 19 11:36:10.219390 kernel: loop0: detected capacity change from 0 to 113512
Mar 19 11:36:10.219442 systemd-journald[1279]: /var/log/journal/2582e11f53f84623af08765314670a75/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 19 11:36:10.219464 systemd-journald[1279]: Rotating system journal.
Mar 19 11:36:10.104412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:36:10.219094 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 11:36:10.221368 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 11:36:10.231003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 11:36:10.784638 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 11:36:10.795520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:36:10.831266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 11:36:10.895277 kernel: loop1: detected capacity change from 0 to 28720
Mar 19 11:36:10.911166 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Mar 19 11:36:10.911181 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Mar 19 11:36:10.915699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:36:11.589968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 11:36:11.603437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:36:11.626547 systemd-udevd[1339]: Using default interface naming scheme 'v255'.
Mar 19 11:36:11.775266 kernel: loop2: detected capacity change from 0 to 123192
Mar 19 11:36:11.980281 kernel: loop3: detected capacity change from 0 to 201592
Mar 19 11:36:12.015279 kernel: loop4: detected capacity change from 0 to 113512
Mar 19 11:36:12.026286 kernel: loop5: detected capacity change from 0 to 28720
Mar 19 11:36:12.037275 kernel: loop6: detected capacity change from 0 to 123192
Mar 19 11:36:12.047261 kernel: loop7: detected capacity change from 0 to 201592
Mar 19 11:36:12.053364 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 19 11:36:12.053840 (sd-merge)[1343]: Merged extensions into '/usr'.
Mar 19 11:36:12.056926 systemd[1]: Reload requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 11:36:12.057222 systemd[1]: Reloading...
Mar 19 11:36:12.136286 zram_generator::config[1371]: No configuration found.
Mar 19 11:36:12.308171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:36:12.373336 kernel: hv_vmbus: registering driver hv_balloon
Mar 19 11:36:12.373434 kernel: mousedev: PS/2 mouse device common for all mice
Mar 19 11:36:12.373451 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 19 11:36:12.379381 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 19 11:36:12.402254 kernel: hv_vmbus: registering driver hyperv_fb
Mar 19 11:36:12.402353 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 19 11:36:12.402373 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 19 11:36:12.412253 kernel: Console: switching to colour dummy device 80x25
Mar 19 11:36:12.417735 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 19 11:36:12.417995 systemd[1]: Reloading finished in 360 ms.
Mar 19 11:36:12.419708 kernel: Console: switching to colour frame buffer device 128x48
Mar 19 11:36:12.449948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:36:12.463284 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 11:36:12.496470 systemd[1]: Starting ensure-sysext.service...
Mar 19 11:36:12.514524 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:36:12.528257 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1425)
Mar 19 11:36:12.539282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:36:12.553816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:36:12.583863 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 11:36:12.585007 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 11:36:12.586132 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 11:36:12.587975 systemd-tmpfiles[1485]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 11:36:12.588182 systemd-tmpfiles[1485]: ACLs are not supported, ignoring.
Mar 19 11:36:12.588228 systemd-tmpfiles[1485]: ACLs are not supported, ignoring.
Mar 19 11:36:12.593747 systemd[1]: Reload requested from client PID 1473 ('systemctl') (unit ensure-sysext.service)...
Mar 19 11:36:12.593756 systemd[1]: Reloading...
Mar 19 11:36:12.605726 systemd-tmpfiles[1485]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:36:12.606304 systemd-tmpfiles[1485]: Skipping /boot
Mar 19 11:36:12.632732 systemd-tmpfiles[1485]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:36:12.633206 systemd-tmpfiles[1485]: Skipping /boot
Mar 19 11:36:12.713449 zram_generator::config[1545]: No configuration found.
Mar 19 11:36:12.861694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:36:12.885921 systemd-networkd[1479]: lo: Link UP
Mar 19 11:36:12.885931 systemd-networkd[1479]: lo: Gained carrier
Mar 19 11:36:12.888475 systemd-networkd[1479]: Enumeration completed
Mar 19 11:36:12.888908 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:36:12.888914 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:36:12.937279 kernel: mlx5_core 27c2:00:02.0 enP10178s1: Link up
Mar 19 11:36:12.957109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 19 11:36:12.971782 systemd[1]: Reloading finished in 377 ms.
Mar 19 11:36:12.973262 kernel: hv_netvsc 002248b7-f795-0022-48b7-f795002248b7 eth0: Data path switched to VF: enP10178s1
Mar 19 11:36:12.973741 systemd-networkd[1479]: enP10178s1: Link UP
Mar 19 11:36:12.973850 systemd-networkd[1479]: eth0: Link UP
Mar 19 11:36:12.973861 systemd-networkd[1479]: eth0: Gained carrier
Mar 19 11:36:12.973878 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:36:12.978949 systemd-networkd[1479]: enP10178s1: Gained carrier
Mar 19 11:36:12.983030 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 11:36:12.989751 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 11:36:13.000364 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 19 11:36:13.012435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:36:13.042304 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 11:36:13.060680 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:36:13.070700 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 11:36:13.078182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:36:13.083666 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 11:36:13.095895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:36:13.106538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:36:13.120404 lvm[1629]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:36:13.127634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:36:13.133504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:36:13.136509 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 11:36:13.146606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:36:13.151562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 11:36:13.160536 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 11:36:13.169580 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 11:36:13.182364 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:36:13.207563 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 11:36:13.217620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:36:13.217847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:36:13.228759 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:36:13.229618 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 19 11:36:13.236934 augenrules[1659]: No rules
Mar 19 11:36:13.237676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:36:13.237842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:36:13.245715 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:36:13.246321 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:36:13.253209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:36:13.253541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:36:13.261172 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:36:13.261540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:36:13.268991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 11:36:13.277304 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 11:36:13.303296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:36:13.309968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:36:13.312620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 19 11:36:13.325352 lvm[1673]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:36:13.325724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:36:13.341555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:36:13.352910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:36:13.359489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:36:13.359636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:36:13.361693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:36:13.379293 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 19 11:36:13.390622 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 11:36:13.399864 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 19 11:36:13.412331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:36:13.413003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:36:13.421814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:36:13.422126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:36:13.430621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:36:13.430802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:36:13.437701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:36:13.454106 systemd-resolved[1653]: Positive Trust Anchors:
Mar 19 11:36:13.454133 systemd-resolved[1653]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:36:13.454164 systemd-resolved[1653]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:36:13.462663 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:36:13.464564 systemd-resolved[1653]: Using system hostname 'ci-4230.1.0-a-2247daed6b'.
Mar 19 11:36:13.470837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:36:13.473583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:36:13.486359 augenrules[1689]: /sbin/augenrules: No change
Mar 19 11:36:13.493512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:36:13.500139 augenrules[1708]: No rules
Mar 19 11:36:13.504638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:36:13.513624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:36:13.520089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:36:13.520246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:36:13.520401 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 11:36:13.528105 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:36:13.536768 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:36:13.537016 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:36:13.543334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:36:13.543496 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:36:13.552053 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:36:13.552220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:36:13.559305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:36:13.560299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:36:13.567779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:36:13.567948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:36:13.578320 systemd[1]: Finished ensure-sysext.service.
Mar 19 11:36:13.588155 systemd[1]: Reached target network.target - Network.
Mar 19 11:36:13.594053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:36:13.601832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:36:13.601902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:36:13.938515 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 11:36:13.946340 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 11:36:14.241353 systemd-networkd[1479]: eth0: Gained IPv6LL
Mar 19 11:36:14.241884 systemd-networkd[1479]: enP10178s1: Gained IPv6LL
Mar 19 11:36:14.246966 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 19 11:36:14.254645 systemd[1]: Reached target network-online.target - Network is Online.
Mar 19 11:36:20.274742 ldconfig[1310]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 11:36:20.293664 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 11:36:20.308728 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 11:36:20.323712 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 11:36:20.331850 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 11:36:20.339397 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 19 11:36:20.346730 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 19 11:36:20.354739 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 19 11:36:20.362413 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 19 11:36:20.371562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 19 11:36:20.379481 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 19 11:36:20.379525 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:36:20.385333 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:36:20.407533 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 19 11:36:20.415899 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 19 11:36:20.424167 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 19 11:36:20.432573 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 19 11:36:20.440213 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 19 11:36:20.448843 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 19 11:36:20.456441 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 19 11:36:20.464382 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 19 11:36:20.471382 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:36:20.477019 systemd[1]: Reached target basic.target - Basic System.
Mar 19 11:36:20.482253 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:36:20.482287 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:36:20.491358 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 19 11:36:20.502576 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 19 11:36:20.512497 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 19 11:36:20.524019 (chronyd)[1728]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 19 11:36:20.524474 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 19 11:36:20.531489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 19 11:36:20.541447 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 19 11:36:20.547347 jq[1735]: false
Mar 19 11:36:20.547618 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 19 11:36:20.547660 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 19 11:36:20.551542 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 19 11:36:20.561630 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 19 11:36:20.562833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:36:20.571518 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 19 11:36:20.580515 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 19 11:36:20.600680 KVP[1737]: KVP starting; pid is:1737
Mar 19 11:36:20.605451 chronyd[1746]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 19 11:36:20.607365 kernel: hv_utils: KVP IC version 4.0
Mar 19 11:36:20.606142 KVP[1737]: KVP LIC Version: 3.1
Mar 19 11:36:20.607379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 19 11:36:20.619526 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 19 11:36:20.630739 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 19 11:36:20.644607 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 19 11:36:20.654007 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 19 11:36:20.654869 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 19 11:36:20.656767 systemd[1]: Starting update-engine.service - Update Engine...
Mar 19 11:36:20.665417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 19 11:36:20.685189 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 19 11:36:20.686711 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 19 11:36:20.690351 systemd[1]: motdgen.service: Deactivated successfully.
Mar 19 11:36:20.691482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 19 11:36:20.700150 jq[1758]: true
Mar 19 11:36:20.708214 chronyd[1746]: Timezone right/UTC failed leap second check, ignoring
Mar 19 11:36:20.708505 chronyd[1746]: Loaded seccomp filter (level 2)
Mar 19 11:36:20.708737 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 19 11:36:20.709029 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found loop4
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found loop5
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found loop6
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found loop7
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found sda
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found sda1
Mar 19 11:36:20.722653 extend-filesystems[1736]: Found sda2
Mar 19 11:36:20.722125 systemd[1]: Started chronyd.service - NTP client/server.
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found sda3
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found usr
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found sda4
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found sda6
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found sda7
Mar 19 11:36:20.784744 extend-filesystems[1736]: Found sda9
Mar 19 11:36:20.784744 extend-filesystems[1736]: Checking size of /dev/sda9
Mar 19 11:36:20.929437 jq[1764]: true
Mar 19 11:36:20.793138 (ntainerd)[1767]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 19 11:36:20.949147 tar[1762]: linux-arm64/LICENSE
Mar 19 11:36:20.949147 tar[1762]: linux-arm64/helm
Mar 19 11:36:20.949415 extend-filesystems[1736]: Old size kept for /dev/sda9
Mar 19 11:36:20.949415 extend-filesystems[1736]: Found sr0
Mar 19 11:36:20.799751 dbus-daemon[1731]: [system] SELinux support is enabled
Mar 19 11:36:20.983936 update_engine[1753]: I20250319 11:36:20.817548 1753 main.cc:92] Flatcar Update Engine starting
Mar 19 11:36:20.983936 update_engine[1753]: I20250319 11:36:20.844070 1753 update_check_scheduler.cc:74] Next update check in 4m1s
Mar 19 11:36:20.799986 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 19 11:36:20.823020 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 19 11:36:20.823053 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 19 11:36:20.835648 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 19 11:36:20.835670 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 19 11:36:20.845702 systemd[1]: Started update-engine.service - Update Engine.
Mar 19 11:36:20.872709 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 19 11:36:20.891526 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 19 11:36:20.903116 systemd-logind[1750]: New seat seat0.
Mar 19 11:36:20.905402 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 19 11:36:20.905659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 19 11:36:20.910897 systemd-logind[1750]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 19 11:36:20.949038 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 19 11:36:21.041317 bash[1803]: Updated "/home/core/.ssh/authorized_keys"
Mar 19 11:36:21.046016 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 19 11:36:21.070798 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1812)
Mar 19 11:36:21.068014 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 19 11:36:21.078498 coreos-metadata[1730]: Mar 19 11:36:21.077 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 19 11:36:21.081632 coreos-metadata[1730]: Mar 19 11:36:21.081 INFO Fetch successful
Mar 19 11:36:21.081732 coreos-metadata[1730]: Mar 19 11:36:21.081 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 19 11:36:21.086855 coreos-metadata[1730]: Mar 19 11:36:21.086 INFO Fetch successful
Mar 19 11:36:21.086855 coreos-metadata[1730]: Mar 19 11:36:21.086 INFO Fetching http://168.63.129.16/machine/9252b3e8-27f4-47f6-a46f-e9bedebf5e0e/4bc066e6%2Dde7b%2D4e74%2Da01a%2D2c79dcfe8997.%5Fci%2D4230.1.0%2Da%2D2247daed6b?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 19 11:36:21.089131 coreos-metadata[1730]: Mar 19 11:36:21.089 INFO Fetch successful
Mar 19 11:36:21.089131 coreos-metadata[1730]: Mar 19 11:36:21.089 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 19 11:36:21.109474 coreos-metadata[1730]: Mar 19 11:36:21.108 INFO Fetch successful
Mar 19 11:36:21.157778 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 19 11:36:21.166963 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 19 11:36:21.477111 locksmithd[1786]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 19 11:36:21.653317 containerd[1767]: time="2025-03-19T11:36:21.652723220Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 19 11:36:21.697020 tar[1762]: linux-arm64/README.md
Mar 19 11:36:21.715117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 19 11:36:21.739169 containerd[1767]: time="2025-03-19T11:36:21.739048860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.741614 containerd[1767]: time="2025-03-19T11:36:21.741567620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742286220Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742323940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742490020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742507260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742581620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742596580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742807220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742821540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742834860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742843860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.742924700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.743666 containerd[1767]: time="2025-03-19T11:36:21.743115620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:36:21.744866 containerd[1767]: time="2025-03-19T11:36:21.744834140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:36:21.744960 containerd[1767]: time="2025-03-19T11:36:21.744946820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 19 11:36:21.745773 containerd[1767]: time="2025-03-19T11:36:21.745737660Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 19 11:36:21.745919 containerd[1767]: time="2025-03-19T11:36:21.745903780Z" level=info msg="metadata content store policy set" policy=shared
Mar 19 11:36:21.771325 containerd[1767]: time="2025-03-19T11:36:21.771277660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 19 11:36:21.771507 containerd[1767]: time="2025-03-19T11:36:21.771494060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 19 11:36:21.771747 containerd[1767]: time="2025-03-19T11:36:21.771719860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 19 11:36:21.771829 containerd[1767]: time="2025-03-19T11:36:21.771815140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 19 11:36:21.771890 containerd[1767]: time="2025-03-19T11:36:21.771879340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772386060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772668900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772763500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772782020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772797900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772815980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772830660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772843820Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772858580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772874540Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772888820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772901340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772915100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 19 11:36:21.774945 containerd[1767]: time="2025-03-19T11:36:21.772936860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.772951380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.772964140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.772977580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.772989860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773002860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773014500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773028140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773040900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773061940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773074620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773086180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773097860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773112220Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773132860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775294 containerd[1767]: time="2025-03-19T11:36:21.773145940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773164700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773218100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773248060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773259100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773273340Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773282620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773294380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773303420Z" level=info msg="NRI interface is disabled by configuration."
Mar 19 11:36:21.775538 containerd[1767]: time="2025-03-19T11:36:21.773312780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 19 11:36:21.775691 containerd[1767]: time="2025-03-19T11:36:21.773597220Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 19 11:36:21.775691 containerd[1767]: time="2025-03-19T11:36:21.773647340Z" level=info msg="Connect containerd service"
Mar 19 11:36:21.775691 containerd[1767]: time="2025-03-19T11:36:21.773677740Z" level=info msg="using legacy CRI server"
Mar 19 11:36:21.775691 containerd[1767]: time="2025-03-19T11:36:21.773686380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 19 11:36:21.775691 containerd[1767]: time="2025-03-19T11:36:21.773802420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 19 11:36:21.777110 containerd[1767]: time="2025-03-19T11:36:21.777069420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:36:21.777307 containerd[1767]: time="2025-03-19T11:36:21.777269100Z" level=info msg="Start subscribing containerd event"
Mar 19 11:36:21.777369 containerd[1767]: time="2025-03-19T11:36:21.777326060Z" level=info msg="Start recovering state"
Mar 19 11:36:21.777419 containerd[1767]: time="2025-03-19T11:36:21.777400980Z" level=info msg="Start event monitor"
Mar 19 11:36:21.777446 containerd[1767]: time="2025-03-19T11:36:21.777419540Z" level=info msg="Start snapshots syncer"
Mar 19 11:36:21.777446 containerd[1767]: time="2025-03-19T11:36:21.777431660Z" level=info msg="Start cni network conf syncer for default"
Mar 19 11:36:21.777446 containerd[1767]: time="2025-03-19T11:36:21.777439020Z" level=info msg="Start streaming server"
Mar 19 11:36:21.777696 containerd[1767]: time="2025-03-19T11:36:21.777663580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 11:36:21.777824 containerd[1767]: time="2025-03-19T11:36:21.777809060Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 11:36:21.778049 systemd[1]: Started containerd.service - containerd container runtime.
Mar 19 11:36:21.791611 containerd[1767]: time="2025-03-19T11:36:21.791564620Z" level=info msg="containerd successfully booted in 0.141469s"
Mar 19 11:36:21.952420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:36:21.964225 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:36:22.367421 kubelet[1894]: E0319 11:36:22.367336 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:36:22.369405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:36:22.369545 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:36:22.371381 systemd[1]: kubelet.service: Consumed 707ms CPU time, 249.1M memory peak. Mar 19 11:36:23.132587 sshd_keygen[1759]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:36:23.154326 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:36:23.166853 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:36:23.179395 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 19 11:36:23.192223 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:36:23.192578 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:36:23.211774 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:36:23.221055 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 19 11:36:23.238363 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:36:23.255613 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:36:23.269610 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 19 11:36:23.276798 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:36:23.282598 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:36:23.292348 systemd[1]: Startup finished in 709ms (kernel) + 14.708s (initrd) + 19.007s (userspace) = 34.426s. Mar 19 11:36:23.666887 login[1925]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 19 11:36:23.667467 login[1924]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:23.682205 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:36:23.683122 systemd-logind[1750]: New session 2 of user core. Mar 19 11:36:23.689033 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 19 11:36:23.698551 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:36:23.706560 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:36:23.709636 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:36:23.712402 systemd-logind[1750]: New session c1 of user core. Mar 19 11:36:23.985160 systemd[1932]: Queued start job for default target default.target. Mar 19 11:36:23.992745 systemd[1932]: Created slice app.slice - User Application Slice. Mar 19 11:36:23.992784 systemd[1932]: Reached target paths.target - Paths. Mar 19 11:36:23.992824 systemd[1932]: Reached target timers.target - Timers. Mar 19 11:36:23.994173 systemd[1932]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:36:24.004536 systemd[1932]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:36:24.004655 systemd[1932]: Reached target sockets.target - Sockets. Mar 19 11:36:24.004702 systemd[1932]: Reached target basic.target - Basic System. Mar 19 11:36:24.004731 systemd[1932]: Reached target default.target - Main User Target. Mar 19 11:36:24.004757 systemd[1932]: Startup finished in 286ms. Mar 19 11:36:24.005012 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:36:24.018479 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 19 11:36:24.381618 waagent[1921]: 2025-03-19T11:36:24.381445Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 19 11:36:24.387291 waagent[1921]: 2025-03-19T11:36:24.387187Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 19 11:36:24.391699 waagent[1921]: 2025-03-19T11:36:24.391627Z INFO Daemon Daemon Python: 3.11.11 Mar 19 11:36:24.398247 waagent[1921]: 2025-03-19T11:36:24.396120Z INFO Daemon Daemon Run daemon Mar 19 11:36:24.401651 waagent[1921]: 2025-03-19T11:36:24.401590Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 19 11:36:24.412438 waagent[1921]: 2025-03-19T11:36:24.412047Z INFO Daemon Daemon Using waagent for provisioning Mar 19 11:36:24.417943 waagent[1921]: 2025-03-19T11:36:24.417869Z INFO Daemon Daemon Activate resource disk Mar 19 11:36:24.422904 waagent[1921]: 2025-03-19T11:36:24.422835Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 19 11:36:24.437022 waagent[1921]: 2025-03-19T11:36:24.436930Z INFO Daemon Daemon Found device: None Mar 19 11:36:24.441777 waagent[1921]: 2025-03-19T11:36:24.441716Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 19 11:36:24.451705 waagent[1921]: 2025-03-19T11:36:24.451634Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 19 11:36:24.464616 waagent[1921]: 2025-03-19T11:36:24.464555Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:36:24.470953 waagent[1921]: 2025-03-19T11:36:24.470888Z INFO Daemon Daemon Running default provisioning handler Mar 19 11:36:24.483940 waagent[1921]: 2025-03-19T11:36:24.483843Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 19 11:36:24.497761 waagent[1921]: 2025-03-19T11:36:24.497690Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 19 11:36:24.508145 waagent[1921]: 2025-03-19T11:36:24.508051Z INFO Daemon Daemon cloud-init is enabled: False Mar 19 11:36:24.513700 waagent[1921]: 2025-03-19T11:36:24.513631Z INFO Daemon Daemon Copying ovf-env.xml Mar 19 11:36:24.630604 waagent[1921]: 2025-03-19T11:36:24.630509Z INFO Daemon Daemon Successfully mounted dvd Mar 19 11:36:24.648220 waagent[1921]: 2025-03-19T11:36:24.648077Z INFO Daemon Daemon Detect protocol endpoint Mar 19 11:36:24.649225 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 19 11:36:24.655681 waagent[1921]: 2025-03-19T11:36:24.655597Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:36:24.662646 waagent[1921]: 2025-03-19T11:36:24.662585Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 19 11:36:24.668368 login[1925]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:24.670569 waagent[1921]: 2025-03-19T11:36:24.670069Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 19 11:36:24.676726 waagent[1921]: 2025-03-19T11:36:24.676533Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 19 11:36:24.680115 systemd-logind[1750]: New session 1 of user core. Mar 19 11:36:24.682109 waagent[1921]: 2025-03-19T11:36:24.682042Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 19 11:36:24.688455 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 19 11:36:24.737053 waagent[1921]: 2025-03-19T11:36:24.730837Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 19 11:36:24.737579 waagent[1921]: 2025-03-19T11:36:24.737544Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 19 11:36:24.743326 waagent[1921]: 2025-03-19T11:36:24.743261Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 19 11:36:25.240288 waagent[1921]: 2025-03-19T11:36:25.240016Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 19 11:36:25.247013 waagent[1921]: 2025-03-19T11:36:25.246932Z INFO Daemon Daemon Forcing an update of the goal state. Mar 19 11:36:25.258386 waagent[1921]: 2025-03-19T11:36:25.258324Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:36:25.280916 waagent[1921]: 2025-03-19T11:36:25.280867Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 19 11:36:25.287102 waagent[1921]: 2025-03-19T11:36:25.287050Z INFO Daemon Mar 19 11:36:25.290282 waagent[1921]: 2025-03-19T11:36:25.290207Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 52edf694-44e2-49fa-a4eb-9650e6779999 eTag: 15619944864025857738 source: Fabric] Mar 19 11:36:25.304044 waagent[1921]: 2025-03-19T11:36:25.303988Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 19 11:36:25.312000 waagent[1921]: 2025-03-19T11:36:25.311945Z INFO Daemon Mar 19 11:36:25.315073 waagent[1921]: 2025-03-19T11:36:25.315012Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:36:25.327946 waagent[1921]: 2025-03-19T11:36:25.327897Z INFO Daemon Daemon Downloading artifacts profile blob Mar 19 11:36:25.426813 waagent[1921]: 2025-03-19T11:36:25.426704Z INFO Daemon Downloaded certificate {'thumbprint': 'A7B74C3A18408A29BE91C43A9A5449861DC5000F', 'hasPrivateKey': False} Mar 19 11:36:25.437580 waagent[1921]: 2025-03-19T11:36:25.437523Z INFO Daemon Downloaded certificate {'thumbprint': '3165E4264E08F9F2927D7773F91F0A21DE6527EB', 'hasPrivateKey': True} Mar 19 11:36:25.448247 waagent[1921]: 2025-03-19T11:36:25.448189Z INFO Daemon Fetch goal state completed Mar 19 11:36:25.460650 waagent[1921]: 2025-03-19T11:36:25.460575Z INFO Daemon Daemon Starting provisioning Mar 19 11:36:25.465994 waagent[1921]: 2025-03-19T11:36:25.465919Z INFO Daemon Daemon Handle ovf-env.xml. Mar 19 11:36:25.471984 waagent[1921]: 2025-03-19T11:36:25.471917Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-2247daed6b] Mar 19 11:36:25.486263 waagent[1921]: 2025-03-19T11:36:25.485325Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-2247daed6b] Mar 19 11:36:25.492588 waagent[1921]: 2025-03-19T11:36:25.492483Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 19 11:36:25.499310 waagent[1921]: 2025-03-19T11:36:25.499221Z INFO Daemon Daemon Primary interface is [eth0] Mar 19 11:36:25.513533 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:36:25.513543 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 19 11:36:25.513576 systemd-networkd[1479]: eth0: DHCP lease lost Mar 19 11:36:25.514944 waagent[1921]: 2025-03-19T11:36:25.514847Z INFO Daemon Daemon Create user account if not exists Mar 19 11:36:25.521316 waagent[1921]: 2025-03-19T11:36:25.521221Z INFO Daemon Daemon User core already exists, skip useradd Mar 19 11:36:25.527598 waagent[1921]: 2025-03-19T11:36:25.527514Z INFO Daemon Daemon Configure sudoer Mar 19 11:36:25.532769 waagent[1921]: 2025-03-19T11:36:25.532690Z INFO Daemon Daemon Configure sshd Mar 19 11:36:25.537498 waagent[1921]: 2025-03-19T11:36:25.537424Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 19 11:36:25.551316 waagent[1921]: 2025-03-19T11:36:25.551188Z INFO Daemon Daemon Deploy ssh public key. Mar 19 11:36:25.562372 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 19 11:36:26.706652 waagent[1921]: 2025-03-19T11:36:26.706586Z INFO Daemon Daemon Provisioning complete Mar 19 11:36:26.726488 waagent[1921]: 2025-03-19T11:36:26.726421Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 19 11:36:26.733188 waagent[1921]: 2025-03-19T11:36:26.733108Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 19 11:36:26.742957 waagent[1921]: 2025-03-19T11:36:26.742885Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 19 11:36:26.888269 waagent[1987]: 2025-03-19T11:36:26.888164Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 19 11:36:26.894556 waagent[1987]: 2025-03-19T11:36:26.888381Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 19 11:36:26.894556 waagent[1987]: 2025-03-19T11:36:26.888443Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 19 11:36:26.908355 waagent[1987]: 2025-03-19T11:36:26.908229Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 19 11:36:26.908560 waagent[1987]: 2025-03-19T11:36:26.908519Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:36:26.908623 waagent[1987]: 2025-03-19T11:36:26.908592Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:36:26.917852 waagent[1987]: 2025-03-19T11:36:26.917755Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:36:26.924739 waagent[1987]: 2025-03-19T11:36:26.924689Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 19 11:36:26.925340 waagent[1987]: 2025-03-19T11:36:26.925291Z INFO ExtHandler Mar 19 11:36:26.925419 waagent[1987]: 2025-03-19T11:36:26.925387Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 80f1bf28-244b-4ce0-9758-7b522fc1ea0b eTag: 15619944864025857738 source: Fabric] Mar 19 11:36:26.925739 waagent[1987]: 2025-03-19T11:36:26.925696Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 19 11:36:26.926360 waagent[1987]: 2025-03-19T11:36:26.926309Z INFO ExtHandler Mar 19 11:36:26.926431 waagent[1987]: 2025-03-19T11:36:26.926401Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:36:26.931109 waagent[1987]: 2025-03-19T11:36:26.931060Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 19 11:36:27.026111 waagent[1987]: 2025-03-19T11:36:27.025933Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A7B74C3A18408A29BE91C43A9A5449861DC5000F', 'hasPrivateKey': False} Mar 19 11:36:27.026571 waagent[1987]: 2025-03-19T11:36:27.026519Z INFO ExtHandler Downloaded certificate {'thumbprint': '3165E4264E08F9F2927D7773F91F0A21DE6527EB', 'hasPrivateKey': True} Mar 19 11:36:27.027013 waagent[1987]: 2025-03-19T11:36:27.026969Z INFO ExtHandler Fetch goal state completed Mar 19 11:36:27.043423 waagent[1987]: 2025-03-19T11:36:27.043344Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1987 Mar 19 11:36:27.043594 waagent[1987]: 2025-03-19T11:36:27.043556Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 19 11:36:27.045333 waagent[1987]: 2025-03-19T11:36:27.045274Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 19 11:36:27.045737 waagent[1987]: 2025-03-19T11:36:27.045694Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 19 11:36:27.057452 waagent[1987]: 2025-03-19T11:36:27.057402Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 19 11:36:27.057664 waagent[1987]: 2025-03-19T11:36:27.057623Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 19 11:36:27.064269 waagent[1987]: 2025-03-19T11:36:27.063973Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Mar 19 11:36:27.071537 systemd[1]: Reload requested from client PID 2002 ('systemctl') (unit waagent.service)... Mar 19 11:36:27.071805 systemd[1]: Reloading... Mar 19 11:36:27.180281 zram_generator::config[2044]: No configuration found. Mar 19 11:36:27.293812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:36:27.395273 systemd[1]: Reloading finished in 323 ms. Mar 19 11:36:27.412262 waagent[1987]: 2025-03-19T11:36:27.409713Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 19 11:36:27.416041 systemd[1]: Reload requested from client PID 2095 ('systemctl') (unit waagent.service)... Mar 19 11:36:27.416058 systemd[1]: Reloading... Mar 19 11:36:27.517264 zram_generator::config[2134]: No configuration found. Mar 19 11:36:27.633867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:36:27.735110 systemd[1]: Reloading finished in 318 ms. Mar 19 11:36:27.749592 waagent[1987]: 2025-03-19T11:36:27.748699Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 19 11:36:27.749592 waagent[1987]: 2025-03-19T11:36:27.748889Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 19 11:36:27.855332 waagent[1987]: 2025-03-19T11:36:27.855222Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 19 11:36:27.856126 waagent[1987]: 2025-03-19T11:36:27.856040Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 19 11:36:27.857168 waagent[1987]: 2025-03-19T11:36:27.857102Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 19 11:36:27.857326 waagent[1987]: 2025-03-19T11:36:27.857272Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:36:27.857421 waagent[1987]: 2025-03-19T11:36:27.857388Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:36:27.857657 waagent[1987]: 2025-03-19T11:36:27.857610Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 19 11:36:27.858327 waagent[1987]: 2025-03-19T11:36:27.858262Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 19 11:36:27.858383 waagent[1987]: 2025-03-19T11:36:27.858329Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 19 11:36:27.858383 waagent[1987]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 19 11:36:27.858383 waagent[1987]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 19 11:36:27.858383 waagent[1987]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 19 11:36:27.858383 waagent[1987]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:36:27.858383 waagent[1987]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:36:27.858383 waagent[1987]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:36:27.859051 waagent[1987]: 2025-03-19T11:36:27.858981Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 19 11:36:27.859278 waagent[1987]: 2025-03-19T11:36:27.859216Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:36:27.859375 waagent[1987]: 2025-03-19T11:36:27.859340Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:36:27.859544 waagent[1987]: 2025-03-19T11:36:27.859503Z INFO EnvHandler 
ExtHandler Configure routes Mar 19 11:36:27.859609 waagent[1987]: 2025-03-19T11:36:27.859580Z INFO EnvHandler ExtHandler Gateway:None Mar 19 11:36:27.859651 waagent[1987]: 2025-03-19T11:36:27.859632Z INFO EnvHandler ExtHandler Routes:None Mar 19 11:36:27.859939 waagent[1987]: 2025-03-19T11:36:27.859860Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 19 11:36:27.860750 waagent[1987]: 2025-03-19T11:36:27.860682Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 19 11:36:27.860912 waagent[1987]: 2025-03-19T11:36:27.860802Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 19 11:36:27.861092 waagent[1987]: 2025-03-19T11:36:27.861046Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 19 11:36:27.869642 waagent[1987]: 2025-03-19T11:36:27.869588Z INFO ExtHandler ExtHandler Mar 19 11:36:27.870277 waagent[1987]: 2025-03-19T11:36:27.869925Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 16bdb272-900c-4054-83f8-6fd34c6d06f6 correlation 1fd9286f-f045-4c0e-88e4-019be6fd108d created: 2025-03-19T11:35:01.269096Z] Mar 19 11:36:27.870680 waagent[1987]: 2025-03-19T11:36:27.870620Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 19 11:36:27.871720 waagent[1987]: 2025-03-19T11:36:27.871659Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 19 11:36:27.880606 waagent[1987]: 2025-03-19T11:36:27.880515Z INFO MonitorHandler ExtHandler Network interfaces: Mar 19 11:36:27.880606 waagent[1987]: Executing ['ip', '-a', '-o', 'link']: Mar 19 11:36:27.880606 waagent[1987]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 19 11:36:27.880606 waagent[1987]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:f7:95 brd ff:ff:ff:ff:ff:ff Mar 19 11:36:27.880606 waagent[1987]: 3: enP10178s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:f7:95 brd ff:ff:ff:ff:ff:ff\ altname enP10178p0s2 Mar 19 11:36:27.880606 waagent[1987]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 19 11:36:27.880606 waagent[1987]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 19 11:36:27.880606 waagent[1987]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 19 11:36:27.880606 waagent[1987]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 19 11:36:27.880606 waagent[1987]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 19 11:36:27.880606 waagent[1987]: 2: eth0 inet6 fe80::222:48ff:feb7:f795/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:36:27.880606 waagent[1987]: 3: enP10178s1 inet6 fe80::222:48ff:feb7:f795/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:36:27.919226 waagent[1987]: 2025-03-19T11:36:27.919144Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
055E7CA4-4612-4965-84B7-694F738CBB03;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 19 11:36:27.934817 waagent[1987]: 2025-03-19T11:36:27.934718Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 19 11:36:27.934817 waagent[1987]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.934817 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.934817 waagent[1987]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.934817 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.934817 waagent[1987]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.934817 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.934817 waagent[1987]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:36:27.934817 waagent[1987]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:36:27.934817 waagent[1987]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 19 11:36:27.938563 waagent[1987]: 2025-03-19T11:36:27.938461Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 19 11:36:27.938563 waagent[1987]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.938563 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.938563 waagent[1987]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.938563 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.938563 waagent[1987]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:36:27.938563 waagent[1987]: pkts bytes target prot opt in out source destination Mar 19 11:36:27.938563 waagent[1987]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:36:27.938563 waagent[1987]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:36:27.938563 waagent[1987]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Mar 19 11:36:27.938873 waagent[1987]: 2025-03-19T11:36:27.938824Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 19 11:36:32.409690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:36:32.421514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:32.545134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:32.556642 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:32.630868 kubelet[2229]: E0319 11:36:32.630802 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:32.634196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:32.634527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:36:32.634984 systemd[1]: kubelet.service: Consumed 144ms CPU time, 102.9M memory peak. Mar 19 11:36:42.659815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:36:42.669496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:42.986313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:36:42.994560 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:43.038190 kubelet[2244]: E0319 11:36:43.038072 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:43.040281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:43.040430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:36:43.041358 systemd[1]: kubelet.service: Consumed 142ms CPU time, 102.4M memory peak. Mar 19 11:36:44.505276 chronyd[1746]: Selected source PHC0 Mar 19 11:36:53.159959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 11:36:53.167457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:53.503202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:53.514627 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:53.552835 kubelet[2259]: E0319 11:36:53.552772 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:53.555477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:53.555752 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 19 11:36:53.556298 systemd[1]: kubelet.service: Consumed 137ms CPU time, 102.2M memory peak. Mar 19 11:36:53.774851 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:36:53.779591 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:54684.service - OpenSSH per-connection server daemon (10.200.16.10:54684). Mar 19 11:36:54.270646 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 54684 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:54.272137 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:54.276928 systemd-logind[1750]: New session 3 of user core. Mar 19 11:36:54.285448 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:36:54.677792 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:54692.service - OpenSSH per-connection server daemon (10.200.16.10:54692). Mar 19 11:36:55.124137 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 54692 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:55.125474 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:55.134304 systemd-logind[1750]: New session 4 of user core. Mar 19 11:36:55.145475 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:36:55.446640 sshd[2274]: Connection closed by 10.200.16.10 port 54692 Mar 19 11:36:55.447225 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:55.450407 systemd-logind[1750]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:36:55.450666 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:54692.service: Deactivated successfully. Mar 19 11:36:55.453165 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:36:55.454904 systemd-logind[1750]: Removed session 4. 
Mar 19 11:36:55.532592 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:54696.service - OpenSSH per-connection server daemon (10.200.16.10:54696).
Mar 19 11:36:55.978321 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 54696 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:36:55.979705 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:55.985495 systemd-logind[1750]: New session 5 of user core.
Mar 19 11:36:55.991206 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 19 11:36:56.297716 sshd[2282]: Connection closed by 10.200.16.10 port 54696
Mar 19 11:36:56.298500 sshd-session[2280]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:56.302056 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:54696.service: Deactivated successfully.
Mar 19 11:36:56.305146 systemd[1]: session-5.scope: Deactivated successfully.
Mar 19 11:36:56.306103 systemd-logind[1750]: Session 5 logged out. Waiting for processes to exit.
Mar 19 11:36:56.307176 systemd-logind[1750]: Removed session 5.
Mar 19 11:36:56.384891 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:54702.service - OpenSSH per-connection server daemon (10.200.16.10:54702).
Mar 19 11:36:56.871822 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 54702 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:36:56.873142 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:56.878482 systemd-logind[1750]: New session 6 of user core.
Mar 19 11:36:56.884495 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 19 11:36:57.216292 sshd[2290]: Connection closed by 10.200.16.10 port 54702
Mar 19 11:36:57.216881 sshd-session[2288]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:57.220601 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:54702.service: Deactivated successfully.
Mar 19 11:36:57.222522 systemd[1]: session-6.scope: Deactivated successfully.
Mar 19 11:36:57.223404 systemd-logind[1750]: Session 6 logged out. Waiting for processes to exit.
Mar 19 11:36:57.225535 systemd-logind[1750]: Removed session 6.
Mar 19 11:36:57.309953 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:54714.service - OpenSSH per-connection server daemon (10.200.16.10:54714).
Mar 19 11:36:57.762642 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 54714 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:36:57.764046 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:57.770215 systemd-logind[1750]: New session 7 of user core.
Mar 19 11:36:57.776531 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 19 11:36:58.055804 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 19 11:36:58.056110 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:36:58.071924 sudo[2299]: pam_unix(sudo:session): session closed for user root
Mar 19 11:36:58.142470 sshd[2298]: Connection closed by 10.200.16.10 port 54714
Mar 19 11:36:58.143340 sshd-session[2296]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:58.147662 systemd-logind[1750]: Session 7 logged out. Waiting for processes to exit.
Mar 19 11:36:58.148687 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:54714.service: Deactivated successfully.
Mar 19 11:36:58.151993 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 11:36:58.153010 systemd-logind[1750]: Removed session 7.
Mar 19 11:36:58.235621 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:54724.service - OpenSSH per-connection server daemon (10.200.16.10:54724).
Mar 19 11:36:58.716659 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 54724 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:36:58.718087 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:58.723818 systemd-logind[1750]: New session 8 of user core.
Mar 19 11:36:58.729424 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 19 11:36:58.988765 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 19 11:36:58.989084 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:36:58.993323 sudo[2309]: pam_unix(sudo:session): session closed for user root
Mar 19 11:36:58.999208 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 19 11:36:58.999524 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:36:59.015602 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:36:59.041485 augenrules[2331]: No rules
Mar 19 11:36:59.042854 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:36:59.043069 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:36:59.045033 sudo[2308]: pam_unix(sudo:session): session closed for user root
Mar 19 11:36:59.115403 sshd[2307]: Connection closed by 10.200.16.10 port 54724
Mar 19 11:36:59.115967 sshd-session[2305]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:59.120182 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:54724.service: Deactivated successfully.
Mar 19 11:36:59.123208 systemd[1]: session-8.scope: Deactivated successfully.
Mar 19 11:36:59.123991 systemd-logind[1750]: Session 8 logged out. Waiting for processes to exit.
Mar 19 11:36:59.125013 systemd-logind[1750]: Removed session 8.
Mar 19 11:36:59.203040 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:54240.service - OpenSSH per-connection server daemon (10.200.16.10:54240).
Mar 19 11:36:59.690380 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 54240 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:36:59.691737 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:59.696134 systemd-logind[1750]: New session 9 of user core.
Mar 19 11:36:59.706539 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 19 11:36:59.962895 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 19 11:36:59.963174 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:37:00.496633 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 19 11:37:01.175522 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 19 11:37:01.183617 (dockerd)[2360]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 19 11:37:01.441352 dockerd[2360]: time="2025-03-19T11:37:01.439440493Z" level=info msg="Starting up"
Mar 19 11:37:01.684056 dockerd[2360]: time="2025-03-19T11:37:01.684013307Z" level=info msg="Loading containers: start."
Mar 19 11:37:01.854269 kernel: Initializing XFRM netlink socket
Mar 19 11:37:01.942674 systemd-networkd[1479]: docker0: Link UP
Mar 19 11:37:01.984721 dockerd[2360]: time="2025-03-19T11:37:01.984681987Z" level=info msg="Loading containers: done."
Mar 19 11:37:02.009290 dockerd[2360]: time="2025-03-19T11:37:02.008837398Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 19 11:37:02.009290 dockerd[2360]: time="2025-03-19T11:37:02.008959919Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 19 11:37:02.009290 dockerd[2360]: time="2025-03-19T11:37:02.009096520Z" level=info msg="Daemon has completed initialization"
Mar 19 11:37:02.078546 dockerd[2360]: time="2025-03-19T11:37:02.078062016Z" level=info msg="API listen on /run/docker.sock"
Mar 19 11:37:02.078350 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 19 11:37:02.670259 containerd[1767]: time="2025-03-19T11:37:02.670202485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 19 11:37:03.659660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 19 11:37:03.666531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:03.707271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353070343.mount: Deactivated successfully.
Mar 19 11:37:04.054435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:04.059650 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:37:04.103666 kubelet[2557]: E0319 11:37:04.103558 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:37:04.105617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:37:04.105751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:37:04.106584 systemd[1]: kubelet.service: Consumed 141ms CPU time, 101.3M memory peak.
Mar 19 11:37:05.866681 update_engine[1753]: I20250319 11:37:05.866612 1753 update_attempter.cc:509] Updating boot flags...
Mar 19 11:37:05.996421 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2628)
Mar 19 11:37:06.044979 containerd[1767]: time="2025-03-19T11:37:06.043029597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:06.047962 containerd[1767]: time="2025-03-19T11:37:06.047408701Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950"
Mar 19 11:37:06.055980 containerd[1767]: time="2025-03-19T11:37:06.054613140Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:06.064483 containerd[1767]: time="2025-03-19T11:37:06.064421313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:06.065367 containerd[1767]: time="2025-03-19T11:37:06.065329158Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 3.395069953s"
Mar 19 11:37:06.065367 containerd[1767]: time="2025-03-19T11:37:06.065367239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 19 11:37:06.067887 containerd[1767]: time="2025-03-19T11:37:06.067842452Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 19 11:37:06.139269 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2627)
Mar 19 11:37:08.257864 containerd[1767]: time="2025-03-19T11:37:08.257799400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:08.262230 containerd[1767]: time="2025-03-19T11:37:08.262167578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032"
Mar 19 11:37:08.268271 containerd[1767]: time="2025-03-19T11:37:08.267327800Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:08.281472 containerd[1767]: time="2025-03-19T11:37:08.281422418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:08.282804 containerd[1767]: time="2025-03-19T11:37:08.282764024Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 2.214880572s"
Mar 19 11:37:08.282932 containerd[1767]: time="2025-03-19T11:37:08.282916384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\""
Mar 19 11:37:08.284014 containerd[1767]: time="2025-03-19T11:37:08.283975429Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 19 11:37:10.502309 containerd[1767]: time="2025-03-19T11:37:10.501409269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:10.504392 containerd[1767]: time="2025-03-19T11:37:10.504330881Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561"
Mar 19 11:37:10.511949 containerd[1767]: time="2025-03-19T11:37:10.511882392Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:10.525371 containerd[1767]: time="2025-03-19T11:37:10.525294087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:10.526221 containerd[1767]: time="2025-03-19T11:37:10.526073570Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 2.241939981s"
Mar 19 11:37:10.526221 containerd[1767]: time="2025-03-19T11:37:10.526113250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\""
Mar 19 11:37:10.527039 containerd[1767]: time="2025-03-19T11:37:10.526738933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 19 11:37:12.354123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668909850.mount: Deactivated successfully.
Mar 19 11:37:12.788416 containerd[1767]: time="2025-03-19T11:37:12.787669101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:12.791776 containerd[1767]: time="2025-03-19T11:37:12.791727477Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095"
Mar 19 11:37:12.795357 containerd[1767]: time="2025-03-19T11:37:12.795299652Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:12.802845 containerd[1767]: time="2025-03-19T11:37:12.802777243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:12.803889 containerd[1767]: time="2025-03-19T11:37:12.803841687Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 2.277069434s"
Mar 19 11:37:12.803959 containerd[1767]: time="2025-03-19T11:37:12.803899607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 19 11:37:12.804751 containerd[1767]: time="2025-03-19T11:37:12.804517410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 19 11:37:14.159711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 19 11:37:14.167535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:15.792110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:15.803603 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:37:15.842260 kubelet[2751]: E0319 11:37:15.841290 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:37:15.843847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:37:15.843983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:37:15.844517 systemd[1]: kubelet.service: Consumed 135ms CPU time, 104.2M memory peak.
Mar 19 11:37:25.909818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 19 11:37:25.918428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:26.539348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:26.549584 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:37:26.634494 kubelet[2766]: E0319 11:37:26.634194 2766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:37:26.636885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:37:26.637034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:37:26.637446 systemd[1]: kubelet.service: Consumed 130ms CPU time, 101.7M memory peak.
Mar 19 11:37:27.614595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149721816.mount: Deactivated successfully.
Mar 19 11:37:31.287292 containerd[1767]: time="2025-03-19T11:37:31.286268819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:31.290681 containerd[1767]: time="2025-03-19T11:37:31.290636557Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Mar 19 11:37:31.295939 containerd[1767]: time="2025-03-19T11:37:31.295878299Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:31.302230 containerd[1767]: time="2025-03-19T11:37:31.302152565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:31.303623 containerd[1767]: time="2025-03-19T11:37:31.303482330Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 18.4989018s"
Mar 19 11:37:31.303623 containerd[1767]: time="2025-03-19T11:37:31.303523771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Mar 19 11:37:31.304623 containerd[1767]: time="2025-03-19T11:37:31.304305374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 19 11:37:32.129419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599445069.mount: Deactivated successfully.
Mar 19 11:37:32.313296 containerd[1767]: time="2025-03-19T11:37:32.312631326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:32.358900 containerd[1767]: time="2025-03-19T11:37:32.358827318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 19 11:37:32.401731 containerd[1767]: time="2025-03-19T11:37:32.401409615Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:32.408781 containerd[1767]: time="2025-03-19T11:37:32.408703806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:32.409659 containerd[1767]: time="2025-03-19T11:37:32.409525449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.105075075s"
Mar 19 11:37:32.409659 containerd[1767]: time="2025-03-19T11:37:32.409562169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 19 11:37:32.410506 containerd[1767]: time="2025-03-19T11:37:32.410320933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 19 11:37:33.917899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101757763.mount: Deactivated successfully.
Mar 19 11:37:36.659799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 19 11:37:36.666460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:36.771809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:36.776340 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:37:36.815285 kubelet[2850]: E0319 11:37:36.815213 2850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:37:36.817881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:37:36.818019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:37:36.818768 systemd[1]: kubelet.service: Consumed 135ms CPU time, 101.5M memory peak.
Mar 19 11:37:44.233621 containerd[1767]: time="2025-03-19T11:37:44.232283454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:44.236161 containerd[1767]: time="2025-03-19T11:37:44.236118031Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429"
Mar 19 11:37:44.241751 containerd[1767]: time="2025-03-19T11:37:44.241711695Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:44.248622 containerd[1767]: time="2025-03-19T11:37:44.248555724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:37:44.250300 containerd[1767]: time="2025-03-19T11:37:44.250258412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 11.839884439s"
Mar 19 11:37:44.250470 containerd[1767]: time="2025-03-19T11:37:44.250453653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Mar 19 11:37:46.909702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 19 11:37:46.920852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:50.119800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:50.123124 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:37:50.166369 kubelet[2926]: E0319 11:37:50.165590 2926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:37:50.167833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:37:50.167987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:37:50.168326 systemd[1]: kubelet.service: Consumed 131ms CPU time, 102M memory peak.
Mar 19 11:37:50.184109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:50.184446 systemd[1]: kubelet.service: Consumed 131ms CPU time, 102M memory peak.
Mar 19 11:37:50.195039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:50.225454 systemd[1]: Reload requested from client PID 2941 ('systemctl') (unit session-9.scope)...
Mar 19 11:37:50.225474 systemd[1]: Reloading...
Mar 19 11:37:50.350367 zram_generator::config[2991]: No configuration found.
Mar 19 11:37:50.449334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:37:50.552978 systemd[1]: Reloading finished in 327 ms.
Mar 19 11:37:50.715288 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 19 11:37:50.715383 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 19 11:37:50.715629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:50.721992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:37:58.842830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:37:58.848418 (kubelet)[3052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:37:58.887829 kubelet[3052]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:37:58.887829 kubelet[3052]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:37:58.887829 kubelet[3052]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:37:58.888186 kubelet[3052]: I0319 11:37:58.887846 3052 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:37:59.513142 kubelet[3052]: I0319 11:37:59.513104 3052 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 19 11:37:59.514345 kubelet[3052]: I0319 11:37:59.513327 3052 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:37:59.514345 kubelet[3052]: I0319 11:37:59.513924 3052 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 19 11:37:59.536440 kubelet[3052]: E0319 11:37:59.536396 3052 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:37:59.536958 kubelet[3052]: I0319 11:37:59.536660 3052 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:37:59.544131 kubelet[3052]: E0319 11:37:59.544088 3052 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:37:59.544131 kubelet[3052]: I0319 11:37:59.544125 3052 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:37:59.547678 kubelet[3052]: I0319 11:37:59.547642 3052 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:37:59.548725 kubelet[3052]: I0319 11:37:59.548674 3052 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:37:59.548916 kubelet[3052]: I0319 11:37:59.548726 3052 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-2247daed6b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:37:59.549005 kubelet[3052]: I0319 11:37:59.548926 3052 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:37:59.549005 kubelet[3052]: I0319 11:37:59.548936 3052 container_manager_linux.go:304] "Creating device plugin manager"
Mar 19 11:37:59.549110 kubelet[3052]: I0319 11:37:59.549090 3052 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:37:59.552543 kubelet[3052]: I0319 11:37:59.552516 3052 kubelet.go:446] "Attempting to sync node with API server"
Mar 19 11:37:59.552604 kubelet[3052]: I0319 11:37:59.552554 3052 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:37:59.552604 kubelet[3052]: I0319 11:37:59.552580 3052 kubelet.go:352] "Adding apiserver pod source"
Mar 19 11:37:59.552604 kubelet[3052]: I0319 11:37:59.552592 3052 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:37:59.558108 kubelet[3052]: I0319 11:37:59.558075 3052 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:37:59.558637 kubelet[3052]: I0319 11:37:59.558603 3052 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:37:59.558710 kubelet[3052]: W0319 11:37:59.558665 3052 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 11:37:59.559304 kubelet[3052]: I0319 11:37:59.559277 3052 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:37:59.559384 kubelet[3052]: I0319 11:37:59.559316 3052 server.go:1287] "Started kubelet" Mar 19 11:37:59.559491 kubelet[3052]: W0319 11:37:59.559452 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:37:59.559519 kubelet[3052]: E0319 11:37:59.559505 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:37:59.561048 kubelet[3052]: W0319 11:37:59.560353 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:37:59.561048 kubelet[3052]: E0319 11:37:59.560404 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:37:59.561048 kubelet[3052]: I0319 11:37:59.560442 3052 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:37:59.565413 kubelet[3052]: I0319 11:37:59.565331 3052 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 
19 11:37:59.565901 kubelet[3052]: I0319 11:37:59.565872 3052 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:37:59.567340 kubelet[3052]: E0319 11:37:59.567072 3052 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-2247daed6b.182e31454d5b578a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-2247daed6b,UID:ci-4230.1.0-a-2247daed6b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-2247daed6b,},FirstTimestamp:2025-03-19 11:37:59.559296906 +0000 UTC m=+0.707647582,LastTimestamp:2025-03-19 11:37:59.559296906 +0000 UTC m=+0.707647582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-2247daed6b,}" Mar 19 11:37:59.568924 kubelet[3052]: I0319 11:37:59.568882 3052 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:37:59.569713 kubelet[3052]: I0319 11:37:59.569693 3052 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:37:59.571463 kubelet[3052]: I0319 11:37:59.571427 3052 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:37:59.573853 kubelet[3052]: E0319 11:37:59.573820 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:37:59.573963 kubelet[3052]: I0319 11:37:59.573875 3052 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:37:59.575058 kubelet[3052]: I0319 11:37:59.574640 3052 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:37:59.575058 kubelet[3052]: I0319 11:37:59.574729 3052 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:37:59.575503 kubelet[3052]: W0319 11:37:59.575446 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:37:59.575567 kubelet[3052]: E0319 11:37:59.575508 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:37:59.575595 kubelet[3052]: E0319 11:37:59.575572 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Mar 19 11:37:59.576756 kubelet[3052]: I0319 11:37:59.576736 3052 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:37:59.577001 kubelet[3052]: I0319 11:37:59.576980 3052 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:37:59.577463 kubelet[3052]: E0319 11:37:59.577441 3052 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:37:59.578799 kubelet[3052]: I0319 11:37:59.578780 3052 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:37:59.609691 kubelet[3052]: I0319 11:37:59.609643 3052 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:37:59.609691 kubelet[3052]: I0319 11:37:59.609663 3052 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:37:59.609691 kubelet[3052]: I0319 11:37:59.609685 3052 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:37:59.674015 kubelet[3052]: E0319 11:37:59.673918 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:37:59.775416 kubelet[3052]: E0319 11:37:59.775276 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:37:59.776893 kubelet[3052]: E0319 11:37:59.776855 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Mar 19 11:37:59.875949 kubelet[3052]: E0319 11:37:59.875901 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:37:59.912102 kubelet[3052]: I0319 11:37:59.912063 3052 policy_none.go:49] "None policy: Start" Mar 19 11:37:59.912102 kubelet[3052]: I0319 11:37:59.912103 3052 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:37:59.912521 kubelet[3052]: I0319 11:37:59.912133 3052 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:37:59.925932 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 19 11:37:59.937325 kubelet[3052]: I0319 11:37:59.936601 3052 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:37:59.938455 kubelet[3052]: I0319 11:37:59.938422 3052 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:37:59.938599 kubelet[3052]: I0319 11:37:59.938588 3052 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:37:59.938667 kubelet[3052]: I0319 11:37:59.938658 3052 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 19 11:37:59.938714 kubelet[3052]: I0319 11:37:59.938706 3052 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:37:59.939042 kubelet[3052]: E0319 11:37:59.938796 3052 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:37:59.940524 kubelet[3052]: W0319 11:37:59.940048 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:37:59.940524 kubelet[3052]: E0319 11:37:59.940097 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:37:59.946338 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:37:59.951789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 19 11:37:59.959554 kubelet[3052]: I0319 11:37:59.959379 3052 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:37:59.959706 kubelet[3052]: I0319 11:37:59.959613 3052 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:37:59.959706 kubelet[3052]: I0319 11:37:59.959627 3052 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:37:59.960853 kubelet[3052]: I0319 11:37:59.960126 3052 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:37:59.961876 kubelet[3052]: E0319 11:37:59.961835 3052 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 19 11:37:59.962889 kubelet[3052]: E0319 11:37:59.962866 3052 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:00.050105 systemd[1]: Created slice kubepods-burstable-pod39ade8a143f32a0b173bad7c1594ec3a.slice - libcontainer container kubepods-burstable-pod39ade8a143f32a0b173bad7c1594ec3a.slice. 
Mar 19 11:38:00.061851 kubelet[3052]: I0319 11:38:00.061804 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.062217 kubelet[3052]: E0319 11:38:00.062186 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.064232 kubelet[3052]: E0319 11:38:00.063921 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.067792 systemd[1]: Created slice kubepods-burstable-pod8c689b3c2caa84bbf975e400248fb852.slice - libcontainer container kubepods-burstable-pod8c689b3c2caa84bbf975e400248fb852.slice. Mar 19 11:38:00.070523 kubelet[3052]: E0319 11:38:00.070495 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.073388 systemd[1]: Created slice kubepods-burstable-pod1b207a9170982398103b3a2c98e31e73.slice - libcontainer container kubepods-burstable-pod1b207a9170982398103b3a2c98e31e73.slice. 
Mar 19 11:38:00.075921 kubelet[3052]: E0319 11:38:00.075894 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076400 kubelet[3052]: I0319 11:38:00.076352 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076400 kubelet[3052]: I0319 11:38:00.076378 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076660 kubelet[3052]: I0319 11:38:00.076532 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076660 kubelet[3052]: I0319 11:38:00.076559 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076660 kubelet[3052]: I0319 11:38:00.076603 3052 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076660 kubelet[3052]: I0319 11:38:00.076622 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c689b3c2caa84bbf975e400248fb852-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-2247daed6b\" (UID: \"8c689b3c2caa84bbf975e400248fb852\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076660 kubelet[3052]: I0319 11:38:00.076640 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076894 kubelet[3052]: I0319 11:38:00.076820 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.076894 kubelet[3052]: I0319 11:38:00.076872 3052 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" 
(UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.177907 kubelet[3052]: E0319 11:38:00.177777 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Mar 19 11:38:00.265800 kubelet[3052]: I0319 11:38:00.265189 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.266198 kubelet[3052]: E0319 11:38:00.266156 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.365963 containerd[1767]: time="2025-03-19T11:38:00.365753988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-2247daed6b,Uid:39ade8a143f32a0b173bad7c1594ec3a,Namespace:kube-system,Attempt:0,}" Mar 19 11:38:00.372578 containerd[1767]: time="2025-03-19T11:38:00.372466657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-2247daed6b,Uid:8c689b3c2caa84bbf975e400248fb852,Namespace:kube-system,Attempt:0,}" Mar 19 11:38:00.377821 containerd[1767]: time="2025-03-19T11:38:00.377611119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-2247daed6b,Uid:1b207a9170982398103b3a2c98e31e73,Namespace:kube-system,Attempt:0,}" Mar 19 11:38:00.434059 kubelet[3052]: W0319 11:38:00.433940 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:00.434059 
kubelet[3052]: E0319 11:38:00.434018 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:00.669057 kubelet[3052]: I0319 11:38:00.668900 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.669493 kubelet[3052]: E0319 11:38:00.669461 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:00.827901 kubelet[3052]: W0319 11:38:00.827815 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:00.827901 kubelet[3052]: E0319 11:38:00.827865 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.150545 kubelet[3052]: W0319 11:38:00.873594 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:03.150545 kubelet[3052]: E0319 11:38:00.873667 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.150545 kubelet[3052]: E0319 11:38:00.978910 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Mar 19 11:38:03.150545 kubelet[3052]: I0319 11:38:01.471331 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:03.150545 kubelet[3052]: E0319 11:38:01.471661 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:03.150545 kubelet[3052]: W0319 11:38:01.521447 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:03.150545 kubelet[3052]: E0319 11:38:01.521486 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.150801 kubelet[3052]: E0319 11:38:01.605024 3052 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate 
signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.150801 kubelet[3052]: E0319 11:38:02.580300 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="3.2s" Mar 19 11:38:03.150801 kubelet[3052]: W0319 11:38:02.722194 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:03.150801 kubelet[3052]: E0319 11:38:02.722233 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.150801 kubelet[3052]: I0319 11:38:03.074194 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:03.150801 kubelet[3052]: E0319 11:38:03.074580 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:03.150801 kubelet[3052]: W0319 11:38:03.133161 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: 
connection refused Mar 19 11:38:03.150947 kubelet[3052]: E0319 11:38:03.133200 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.458281 kubelet[3052]: W0319 11:38:03.458175 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:03.458281 kubelet[3052]: E0319 11:38:03.458219 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:03.818285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852029186.mount: Deactivated successfully. 
Mar 19 11:38:03.870343 containerd[1767]: time="2025-03-19T11:38:03.870277668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:38:03.894716 containerd[1767]: time="2025-03-19T11:38:03.894512131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 19 11:38:03.903002 containerd[1767]: time="2025-03-19T11:38:03.902161844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:38:03.909137 containerd[1767]: time="2025-03-19T11:38:03.909082674Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:38:03.934421 containerd[1767]: time="2025-03-19T11:38:03.934330301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:38:03.941223 containerd[1767]: time="2025-03-19T11:38:03.941175811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:38:03.952383 containerd[1767]: time="2025-03-19T11:38:03.952336978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:38:03.953357 containerd[1767]: time="2025-03-19T11:38:03.953320903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 3.587477673s" Mar 19 11:38:03.958410 containerd[1767]: time="2025-03-19T11:38:03.958348124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:38:04.340524 kubelet[3052]: W0319 11:38:04.340439 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:04.340524 kubelet[3052]: E0319 11:38:04.340487 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:04.903183 containerd[1767]: time="2025-03-19T11:38:04.903075474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 4.525382115s" Mar 19 11:38:05.620949 containerd[1767]: time="2025-03-19T11:38:05.620898876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.248348739s" Mar 19 11:38:05.633411 
kubelet[3052]: E0319 11:38:05.633346 3052 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:05.781690 kubelet[3052]: E0319 11:38:05.781644 3052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-2247daed6b?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="6.4s" Mar 19 11:38:06.143437 kubelet[3052]: W0319 11:38:06.143397 3052 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 19 11:38:06.143437 kubelet[3052]: E0319 11:38:06.143446 3052 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-2247daed6b&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:38:06.276463 kubelet[3052]: I0319 11:38:06.276426 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:06.276796 kubelet[3052]: E0319 11:38:06.276768 3052 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:06.495610 containerd[1767]: 
time="2025-03-19T11:38:06.495460727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:38:06.495610 containerd[1767]: time="2025-03-19T11:38:06.495577928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:38:06.496613 containerd[1767]: time="2025-03-19T11:38:06.495591168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.498331 containerd[1767]: time="2025-03-19T11:38:06.497718257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.498591 containerd[1767]: time="2025-03-19T11:38:06.497915098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:38:06.499016 containerd[1767]: time="2025-03-19T11:38:06.498924382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:38:06.499258 containerd[1767]: time="2025-03-19T11:38:06.499183424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.499667 containerd[1767]: time="2025-03-19T11:38:06.499590705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.504765 containerd[1767]: time="2025-03-19T11:38:06.504346166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:38:06.504765 containerd[1767]: time="2025-03-19T11:38:06.504409527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:38:06.504765 containerd[1767]: time="2025-03-19T11:38:06.504421807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.504765 containerd[1767]: time="2025-03-19T11:38:06.504504647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:38:06.535943 systemd[1]: Started cri-containerd-55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353.scope - libcontainer container 55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353. Mar 19 11:38:06.542792 systemd[1]: Started cri-containerd-033a22abcbe9b06fc130f7ac284060ab68bbcbbd51b6daa4c094fde118e2545b.scope - libcontainer container 033a22abcbe9b06fc130f7ac284060ab68bbcbbd51b6daa4c094fde118e2545b. Mar 19 11:38:06.564468 systemd[1]: Started cri-containerd-a15ba89b59cb6ec2f17a428716047f99985dc8e09dd5e0e7008bff3626a76531.scope - libcontainer container a15ba89b59cb6ec2f17a428716047f99985dc8e09dd5e0e7008bff3626a76531. 
Mar 19 11:38:06.610557 containerd[1767]: time="2025-03-19T11:38:06.610503394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-2247daed6b,Uid:8c689b3c2caa84bbf975e400248fb852,Namespace:kube-system,Attempt:0,} returns sandbox id \"55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353\"" Mar 19 11:38:06.615870 containerd[1767]: time="2025-03-19T11:38:06.615511656Z" level=info msg="CreateContainer within sandbox \"55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:38:06.619946 containerd[1767]: time="2025-03-19T11:38:06.619896835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-2247daed6b,Uid:1b207a9170982398103b3a2c98e31e73,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15ba89b59cb6ec2f17a428716047f99985dc8e09dd5e0e7008bff3626a76531\"" Mar 19 11:38:06.625120 containerd[1767]: time="2025-03-19T11:38:06.624847057Z" level=info msg="CreateContainer within sandbox \"a15ba89b59cb6ec2f17a428716047f99985dc8e09dd5e0e7008bff3626a76531\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:38:06.626764 containerd[1767]: time="2025-03-19T11:38:06.626230223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-2247daed6b,Uid:39ade8a143f32a0b173bad7c1594ec3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"033a22abcbe9b06fc130f7ac284060ab68bbcbbd51b6daa4c094fde118e2545b\"" Mar 19 11:38:06.633651 containerd[1767]: time="2025-03-19T11:38:06.633605496Z" level=info msg="CreateContainer within sandbox \"033a22abcbe9b06fc130f7ac284060ab68bbcbbd51b6daa4c094fde118e2545b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:38:06.722823 containerd[1767]: time="2025-03-19T11:38:06.722773848Z" level=info msg="CreateContainer within sandbox 
\"55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d546f30e54f322638f56c9ccbffa9e6858dab0c8fd0a6c03ba6da7eec9aa1ba\"" Mar 19 11:38:06.723905 containerd[1767]: time="2025-03-19T11:38:06.723865733Z" level=info msg="StartContainer for \"4d546f30e54f322638f56c9ccbffa9e6858dab0c8fd0a6c03ba6da7eec9aa1ba\"" Mar 19 11:38:06.729512 containerd[1767]: time="2025-03-19T11:38:06.729357397Z" level=info msg="CreateContainer within sandbox \"a15ba89b59cb6ec2f17a428716047f99985dc8e09dd5e0e7008bff3626a76531\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"299795e3cda9427509df62ee88b9f85dbb1a75f8139685a2007f4af282b1b3e0\"" Mar 19 11:38:06.731269 containerd[1767]: time="2025-03-19T11:38:06.730716243Z" level=info msg="StartContainer for \"299795e3cda9427509df62ee88b9f85dbb1a75f8139685a2007f4af282b1b3e0\"" Mar 19 11:38:06.742265 containerd[1767]: time="2025-03-19T11:38:06.742202894Z" level=info msg="CreateContainer within sandbox \"033a22abcbe9b06fc130f7ac284060ab68bbcbbd51b6daa4c094fde118e2545b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c01d6ed2d0b515eb0edc9bf943b29382ed1c500bc2fd889b10c60867bc55d57\"" Mar 19 11:38:06.744153 containerd[1767]: time="2025-03-19T11:38:06.744116102Z" level=info msg="StartContainer for \"3c01d6ed2d0b515eb0edc9bf943b29382ed1c500bc2fd889b10c60867bc55d57\"" Mar 19 11:38:06.752310 systemd[1]: Started cri-containerd-4d546f30e54f322638f56c9ccbffa9e6858dab0c8fd0a6c03ba6da7eec9aa1ba.scope - libcontainer container 4d546f30e54f322638f56c9ccbffa9e6858dab0c8fd0a6c03ba6da7eec9aa1ba. Mar 19 11:38:06.771518 systemd[1]: Started cri-containerd-299795e3cda9427509df62ee88b9f85dbb1a75f8139685a2007f4af282b1b3e0.scope - libcontainer container 299795e3cda9427509df62ee88b9f85dbb1a75f8139685a2007f4af282b1b3e0. 
Mar 19 11:38:06.793416 systemd[1]: Started cri-containerd-3c01d6ed2d0b515eb0edc9bf943b29382ed1c500bc2fd889b10c60867bc55d57.scope - libcontainer container 3c01d6ed2d0b515eb0edc9bf943b29382ed1c500bc2fd889b10c60867bc55d57. Mar 19 11:38:06.824801 containerd[1767]: time="2025-03-19T11:38:06.824577337Z" level=info msg="StartContainer for \"4d546f30e54f322638f56c9ccbffa9e6858dab0c8fd0a6c03ba6da7eec9aa1ba\" returns successfully" Mar 19 11:38:06.852334 containerd[1767]: time="2025-03-19T11:38:06.851877977Z" level=info msg="StartContainer for \"299795e3cda9427509df62ee88b9f85dbb1a75f8139685a2007f4af282b1b3e0\" returns successfully" Mar 19 11:38:06.860163 containerd[1767]: time="2025-03-19T11:38:06.859352690Z" level=info msg="StartContainer for \"3c01d6ed2d0b515eb0edc9bf943b29382ed1c500bc2fd889b10c60867bc55d57\" returns successfully" Mar 19 11:38:06.964185 kubelet[3052]: E0319 11:38:06.963669 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:06.968362 kubelet[3052]: E0319 11:38:06.967876 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:06.972499 kubelet[3052]: E0319 11:38:06.972344 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:07.068281 systemd[1]: run-containerd-runc-k8s.io-55aee562c70e8ee47f8c97d0cf2279531a84ebfe497ca4ca5a4d063f57f50353-runc.MIeDcU.mount: Deactivated successfully. 
Mar 19 11:38:07.975957 kubelet[3052]: E0319 11:38:07.975379 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:07.975957 kubelet[3052]: E0319 11:38:07.975727 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:08.977753 kubelet[3052]: E0319 11:38:08.977573 3052 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:09.179251 kubelet[3052]: E0319 11:38:09.177197 3052 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-2247daed6b.182e31454d5b578a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-2247daed6b,UID:ci-4230.1.0-a-2247daed6b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-2247daed6b,},FirstTimestamp:2025-03-19 11:37:59.559296906 +0000 UTC m=+0.707647582,LastTimestamp:2025-03-19 11:37:59.559296906 +0000 UTC m=+0.707647582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-2247daed6b,}" Mar 19 11:38:09.231541 kubelet[3052]: E0319 11:38:09.230860 3052 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-2247daed6b.182e31454e70064f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-2247daed6b,UID:ci-4230.1.0-a-2247daed6b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-2247daed6b,},FirstTimestamp:2025-03-19 11:37:59.577429583 +0000 UTC m=+0.725780259,LastTimestamp:2025-03-19 11:37:59.577429583 +0000 UTC m=+0.725780259,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-2247daed6b,}" Mar 19 11:38:09.302062 kubelet[3052]: E0319 11:38:09.301903 3052 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-2247daed6b.182e3145504fffcd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-2247daed6b,UID:ci-4230.1.0-a-2247daed6b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230.1.0-a-2247daed6b status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-2247daed6b,},FirstTimestamp:2025-03-19 11:37:59.608885197 +0000 UTC m=+0.757235873,LastTimestamp:2025-03-19 11:37:59.608885197 +0000 UTC m=+0.757235873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-2247daed6b,}" Mar 19 11:38:09.491052 kubelet[3052]: E0319 11:38:09.490687 3052 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.0-a-2247daed6b" not found Mar 19 11:38:09.860612 kubelet[3052]: E0319 11:38:09.860195 3052 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.0-a-2247daed6b" 
not found Mar 19 11:38:09.963687 kubelet[3052]: E0319 11:38:09.963117 3052 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:10.310403 kubelet[3052]: E0319 11:38:10.310360 3052 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.0-a-2247daed6b" not found Mar 19 11:38:11.218889 kubelet[3052]: E0319 11:38:11.218833 3052 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.0-a-2247daed6b" not found Mar 19 11:38:12.187071 kubelet[3052]: E0319 11:38:12.187024 3052 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-a-2247daed6b\" not found" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:12.680369 kubelet[3052]: I0319 11:38:12.679416 3052 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:12.688836 kubelet[3052]: I0319 11:38:12.688790 3052 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:12.688836 kubelet[3052]: E0319 11:38:12.688839 3052 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230.1.0-a-2247daed6b\": node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:12.692706 kubelet[3052]: E0319 11:38:12.692594 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:12.793288 kubelet[3052]: E0319 11:38:12.793220 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:12.893779 kubelet[3052]: E0319 11:38:12.893724 3052 kubelet_node_status.go:467] "Error getting the current node from lister" 
err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:12.994135 kubelet[3052]: E0319 11:38:12.994017 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:13.026793 systemd[1]: Reload requested from client PID 3328 ('systemctl') (unit session-9.scope)... Mar 19 11:38:13.027092 systemd[1]: Reloading... Mar 19 11:38:13.095126 kubelet[3052]: E0319 11:38:13.095086 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:13.145356 zram_generator::config[3384]: No configuration found. Mar 19 11:38:13.195910 kubelet[3052]: E0319 11:38:13.195865 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:13.241915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:38:13.296413 kubelet[3052]: E0319 11:38:13.296286 3052 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:13.363923 systemd[1]: Reloading finished in 336 ms. Mar 19 11:38:13.394602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:38:13.407643 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:38:13.407924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:38:13.408014 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 124M memory peak. Mar 19 11:38:13.414989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:38:13.772313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:38:13.783659 (kubelet)[3439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:38:13.843862 kubelet[3439]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:38:13.843862 kubelet[3439]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:38:13.843862 kubelet[3439]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:38:13.843862 kubelet[3439]: I0319 11:38:13.843758 3439 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:38:13.860958 kubelet[3439]: I0319 11:38:13.860896 3439 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:38:13.864295 kubelet[3439]: I0319 11:38:13.861045 3439 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:38:13.864295 kubelet[3439]: I0319 11:38:13.861427 3439 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:38:13.865210 kubelet[3439]: I0319 11:38:13.865178 3439 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 11:38:13.870925 kubelet[3439]: I0319 11:38:13.869115 3439 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:38:13.875570 kubelet[3439]: E0319 11:38:13.875217 3439 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:38:13.875778 kubelet[3439]: I0319 11:38:13.875746 3439 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:38:13.881903 kubelet[3439]: I0319 11:38:13.881861 3439 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:38:13.882407 kubelet[3439]: I0319 11:38:13.882369 3439 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:38:13.882845 kubelet[3439]: I0319 11:38:13.882523 3439 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.1.0-a-2247daed6b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:38:13.882998 kubelet[3439]: I0319 11:38:13.882985 3439 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:38:13.883056 kubelet[3439]: I0319 11:38:13.883048 3439 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:38:13.883155 kubelet[3439]: I0319 11:38:13.883145 3439 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:38:13.883406 kubelet[3439]: I0319 11:38:13.883393 3439 
kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:38:13.883485 kubelet[3439]: I0319 11:38:13.883475 3439 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:38:13.883544 kubelet[3439]: I0319 11:38:13.883537 3439 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:38:13.883611 kubelet[3439]: I0319 11:38:13.883603 3439 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:38:13.889683 kubelet[3439]: I0319 11:38:13.889644 3439 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:38:13.890175 kubelet[3439]: I0319 11:38:13.890146 3439 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:38:13.890653 kubelet[3439]: I0319 11:38:13.890625 3439 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:38:13.890710 kubelet[3439]: I0319 11:38:13.890665 3439 server.go:1287] "Started kubelet" Mar 19 11:38:13.894846 kubelet[3439]: I0319 11:38:13.894790 3439 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:38:13.895572 kubelet[3439]: I0319 11:38:13.895503 3439 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:38:13.895822 kubelet[3439]: I0319 11:38:13.895794 3439 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:38:13.896865 kubelet[3439]: I0319 11:38:13.896843 3439 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:38:13.900417 kubelet[3439]: E0319 11:38:13.900386 3439 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:38:13.901399 kubelet[3439]: I0319 11:38:13.901365 3439 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:38:13.906715 kubelet[3439]: I0319 11:38:13.901566 3439 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:38:13.907292 kubelet[3439]: I0319 11:38:13.906855 3439 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:38:13.909553 kubelet[3439]: I0319 11:38:13.909514 3439 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:38:13.909672 kubelet[3439]: I0319 11:38:13.909661 3439 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:38:13.910327 kubelet[3439]: E0319 11:38:13.909901 3439 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-2247daed6b\" not found" Mar 19 11:38:13.913510 kubelet[3439]: I0319 11:38:13.913467 3439 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:38:13.915225 kubelet[3439]: I0319 11:38:13.914825 3439 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:38:13.915225 kubelet[3439]: I0319 11:38:13.914861 3439 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:38:13.915225 kubelet[3439]: I0319 11:38:13.914883 3439 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:38:13.915225 kubelet[3439]: I0319 11:38:13.914890 3439 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:38:13.915225 kubelet[3439]: E0319 11:38:13.914935 3439 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:38:13.922312 kubelet[3439]: I0319 11:38:13.922283 3439 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:38:13.922852 kubelet[3439]: I0319 11:38:13.922596 3439 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:38:13.924429 kubelet[3439]: I0319 11:38:13.924407 3439 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:38:14.015064 kubelet[3439]: E0319 11:38:14.015039 3439 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 11:38:14.029722 kubelet[3439]: I0319 11:38:14.029602 3439 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:38:14.029722 kubelet[3439]: I0319 11:38:14.029623 3439 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:38:14.029722 kubelet[3439]: I0319 11:38:14.029646 3439 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:38:14.029872 kubelet[3439]: I0319 11:38:14.029845 3439 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:38:14.029898 kubelet[3439]: I0319 11:38:14.029856 3439 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:38:14.029898 kubelet[3439]: I0319 11:38:14.029891 3439 policy_none.go:49] "None policy: Start" Mar 19 11:38:14.029946 kubelet[3439]: I0319 11:38:14.029901 3439 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:38:14.029946 kubelet[3439]: I0319 11:38:14.029912 3439 state_mem.go:35] "Initializing new 
in-memory state store" Mar 19 11:38:14.030030 kubelet[3439]: I0319 11:38:14.030010 3439 state_mem.go:75] "Updated machine memory state" Mar 19 11:38:14.038724 kubelet[3439]: I0319 11:38:14.038691 3439 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:38:14.039563 kubelet[3439]: I0319 11:38:14.039050 3439 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:38:14.039563 kubelet[3439]: I0319 11:38:14.039069 3439 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:38:14.039563 kubelet[3439]: I0319 11:38:14.039378 3439 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:38:14.043178 kubelet[3439]: E0319 11:38:14.043136 3439 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 19 11:38:14.152378 kubelet[3439]: I0319 11:38:14.152132 3439 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:14.164024 kubelet[3439]: I0319 11:38:14.163950 3439 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:14.164285 kubelet[3439]: I0319 11:38:14.164154 3439 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.0-a-2247daed6b" Mar 19 11:38:14.217066 kubelet[3439]: I0319 11:38:14.216881 3439 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:14.217066 kubelet[3439]: I0319 11:38:14.217036 3439 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b" Mar 19 11:38:14.221275 kubelet[3439]: I0319 11:38:14.218873 3439 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" Mar 19 
11:38:14.230406 kubelet[3439]: W0319 11:38:14.230298 3439 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:38:14.236176 kubelet[3439]: W0319 11:38:14.236131 3439 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:38:14.236367 kubelet[3439]: W0319 11:38:14.236318 3439 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 19 11:38:14.313022 kubelet[3439]: I0319 11:38:14.312254 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313022 kubelet[3439]: I0319 11:38:14.312307 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313022 kubelet[3439]: I0319 11:38:14.312327 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313022 kubelet[3439]: I0319 11:38:14.312346 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313022 kubelet[3439]: I0319 11:38:14.312360 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c689b3c2caa84bbf975e400248fb852-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-2247daed6b\" (UID: \"8c689b3c2caa84bbf975e400248fb852\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313230 kubelet[3439]: I0319 11:38:14.312375 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39ade8a143f32a0b173bad7c1594ec3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-2247daed6b\" (UID: \"39ade8a143f32a0b173bad7c1594ec3a\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313230 kubelet[3439]: I0319 11:38:14.313212 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313230 kubelet[3439]: I0319 11:38:14.313281 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.313230 kubelet[3439]: I0319 11:38:14.313305 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b207a9170982398103b3a2c98e31e73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-2247daed6b\" (UID: \"1b207a9170982398103b3a2c98e31e73\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b"
Mar 19 11:38:14.888089 kubelet[3439]: I0319 11:38:14.887777 3439 apiserver.go:52] "Watching apiserver"
Mar 19 11:38:16.600602 kubelet[3439]: I0319 11:38:14.910109 3439 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 19 11:38:16.600602 kubelet[3439]: I0319 11:38:15.043735 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-2247daed6b" podStartSLOduration=1.04371528 podStartE2EDuration="1.04371528s" podCreationTimestamp="2025-03-19 11:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:38:15.032831231 +0000 UTC m=+1.243650694" watchObservedRunningTime="2025-03-19 11:38:15.04371528 +0000 UTC m=+1.254534623"
Mar 19 11:38:16.600602 kubelet[3439]: I0319 11:38:15.056174 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-a-2247daed6b" podStartSLOduration=1.056154416 podStartE2EDuration="1.056154416s" podCreationTimestamp="2025-03-19 11:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:38:15.044106882 +0000 UTC m=+1.254926345" watchObservedRunningTime="2025-03-19 11:38:15.056154416 +0000 UTC m=+1.266973799"
Mar 19 11:38:16.600602 kubelet[3439]: I0319 11:38:15.069671 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-a-2247daed6b" podStartSLOduration=1.069648076 podStartE2EDuration="1.069648076s" podCreationTimestamp="2025-03-19 11:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:38:15.056354257 +0000 UTC m=+1.267173640" watchObservedRunningTime="2025-03-19 11:38:15.069648076 +0000 UTC m=+1.280467459"
Mar 19 11:38:16.610511 sudo[3472]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 19 11:38:16.610817 sudo[3472]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 19 11:38:17.089774 sudo[3472]: pam_unix(sudo:session): session closed for user root
Mar 19 11:38:17.174265 kubelet[3439]: I0319 11:38:17.173364 3439 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 19 11:38:17.176089 containerd[1767]: time="2025-03-19T11:38:17.175478433Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 11:38:17.177554 kubelet[3439]: I0319 11:38:17.175808 3439 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 19 11:38:17.736171 kubelet[3439]: I0319 11:38:17.736127 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwsh\" (UniqueName: \"kubernetes.io/projected/690a3415-1c8b-4f51-b632-f9fc94c014af-kube-api-access-wmwsh\") pod \"kube-proxy-md84s\" (UID: \"690a3415-1c8b-4f51-b632-f9fc94c014af\") " pod="kube-system/kube-proxy-md84s"
Mar 19 11:38:17.736171 kubelet[3439]: I0319 11:38:17.736175 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/690a3415-1c8b-4f51-b632-f9fc94c014af-kube-proxy\") pod \"kube-proxy-md84s\" (UID: \"690a3415-1c8b-4f51-b632-f9fc94c014af\") " pod="kube-system/kube-proxy-md84s"
Mar 19 11:38:17.736602 kubelet[3439]: I0319 11:38:17.736195 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/690a3415-1c8b-4f51-b632-f9fc94c014af-xtables-lock\") pod \"kube-proxy-md84s\" (UID: \"690a3415-1c8b-4f51-b632-f9fc94c014af\") " pod="kube-system/kube-proxy-md84s"
Mar 19 11:38:17.736602 kubelet[3439]: I0319 11:38:17.736210 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/690a3415-1c8b-4f51-b632-f9fc94c014af-lib-modules\") pod \"kube-proxy-md84s\" (UID: \"690a3415-1c8b-4f51-b632-f9fc94c014af\") " pod="kube-system/kube-proxy-md84s"
Mar 19 11:38:17.742566 systemd[1]: Created slice kubepods-besteffort-pod690a3415_1c8b_4f51_b632_f9fc94c014af.slice - libcontainer container kubepods-besteffort-pod690a3415_1c8b_4f51_b632_f9fc94c014af.slice.
Mar 19 11:38:18.054331 containerd[1767]: time="2025-03-19T11:38:18.053512768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-md84s,Uid:690a3415-1c8b-4f51-b632-f9fc94c014af,Namespace:kube-system,Attempt:0,}"
Mar 19 11:38:18.104682 containerd[1767]: time="2025-03-19T11:38:18.104110435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:38:18.104682 containerd[1767]: time="2025-03-19T11:38:18.104610797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:38:18.104682 containerd[1767]: time="2025-03-19T11:38:18.104624197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:18.104994 containerd[1767]: time="2025-03-19T11:38:18.104721758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:18.137481 systemd[1]: Started cri-containerd-730e295f301d55e2d8db9f4691e6144834611223190a1546f9eafeddfa21afa4.scope - libcontainer container 730e295f301d55e2d8db9f4691e6144834611223190a1546f9eafeddfa21afa4.
Mar 19 11:38:18.156324 systemd[1]: Created slice kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice - libcontainer container kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice.
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238735 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-bpf-maps\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238781 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-hostproc\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238797 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-lib-modules\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238812 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-xtables-lock\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238831 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-run\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.239739 kubelet[3439]: I0319 11:38:18.238852 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-config-path\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238870 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5wfb\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-kube-api-access-k5wfb\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238889 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cni-path\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238903 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f81a79ea-67e0-45b0-83f8-04f11ab82494-clustermesh-secrets\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238921 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-etc-cni-netd\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238949 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-kernel\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240009 kubelet[3439]: I0319 11:38:18.238965 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-hubble-tls\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240132 kubelet[3439]: I0319 11:38:18.238987 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-cgroup\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.240132 kubelet[3439]: I0319 11:38:18.239004 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-net\") pod \"cilium-rw6rs\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") " pod="kube-system/cilium-rw6rs"
Mar 19 11:38:18.276201 systemd[1]: Created slice kubepods-besteffort-pod20a011e1_04e2_4f67_b8c7_4ebf54237e33.slice - libcontainer container kubepods-besteffort-pod20a011e1_04e2_4f67_b8c7_4ebf54237e33.slice.
Mar 19 11:38:18.335698 containerd[1767]: time="2025-03-19T11:38:18.335011510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-md84s,Uid:690a3415-1c8b-4f51-b632-f9fc94c014af,Namespace:kube-system,Attempt:0,} returns sandbox id \"730e295f301d55e2d8db9f4691e6144834611223190a1546f9eafeddfa21afa4\""
Mar 19 11:38:18.341192 kubelet[3439]: I0319 11:38:18.339547 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20a011e1-04e2-4f67-b8c7-4ebf54237e33-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6cr98\" (UID: \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\") " pod="kube-system/cilium-operator-6c4d7847fc-6cr98"
Mar 19 11:38:18.341192 kubelet[3439]: I0319 11:38:18.339691 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m69\" (UniqueName: \"kubernetes.io/projected/20a011e1-04e2-4f67-b8c7-4ebf54237e33-kube-api-access-h9m69\") pod \"cilium-operator-6c4d7847fc-6cr98\" (UID: \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\") " pod="kube-system/cilium-operator-6c4d7847fc-6cr98"
Mar 19 11:38:18.358584 containerd[1767]: time="2025-03-19T11:38:18.356530366Z" level=info msg="CreateContainer within sandbox \"730e295f301d55e2d8db9f4691e6144834611223190a1546f9eafeddfa21afa4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 11:38:18.465297 containerd[1767]: time="2025-03-19T11:38:18.465204613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rw6rs,Uid:f81a79ea-67e0-45b0-83f8-04f11ab82494,Namespace:kube-system,Attempt:0,}"
Mar 19 11:38:18.580306 containerd[1767]: time="2025-03-19T11:38:18.580213808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6cr98,Uid:20a011e1-04e2-4f67-b8c7-4ebf54237e33,Namespace:kube-system,Attempt:0,}"
Mar 19 11:38:20.761146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225501091.mount: Deactivated successfully.
Mar 19 11:38:20.805269 containerd[1767]: time="2025-03-19T11:38:20.804995043Z" level=info msg="CreateContainer within sandbox \"730e295f301d55e2d8db9f4691e6144834611223190a1546f9eafeddfa21afa4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"517532e0224729e22a4914b7c578accf7f2af865dbcec6d2fcb666312693063f\""
Mar 19 11:38:20.817488 containerd[1767]: time="2025-03-19T11:38:20.816222172Z" level=info msg="StartContainer for \"517532e0224729e22a4914b7c578accf7f2af865dbcec6d2fcb666312693063f\""
Mar 19 11:38:20.818876 containerd[1767]: time="2025-03-19T11:38:20.818502102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:38:20.818876 containerd[1767]: time="2025-03-19T11:38:20.818567503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:38:20.818876 containerd[1767]: time="2025-03-19T11:38:20.818578983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:20.818876 containerd[1767]: time="2025-03-19T11:38:20.818659583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:20.851504 systemd[1]: Started cri-containerd-d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4.scope - libcontainer container d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4.
Mar 19 11:38:20.852922 containerd[1767]: time="2025-03-19T11:38:20.852555172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:38:20.852922 containerd[1767]: time="2025-03-19T11:38:20.852632252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:38:20.852922 containerd[1767]: time="2025-03-19T11:38:20.852650692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:20.852922 containerd[1767]: time="2025-03-19T11:38:20.852766373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:38:20.884468 systemd[1]: Started cri-containerd-517532e0224729e22a4914b7c578accf7f2af865dbcec6d2fcb666312693063f.scope - libcontainer container 517532e0224729e22a4914b7c578accf7f2af865dbcec6d2fcb666312693063f.
Mar 19 11:38:20.895493 systemd[1]: Started cri-containerd-877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441.scope - libcontainer container 877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441.
Mar 19 11:38:20.933377 containerd[1767]: time="2025-03-19T11:38:20.933309006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6cr98,Uid:20a011e1-04e2-4f67-b8c7-4ebf54237e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\""
Mar 19 11:38:20.938830 containerd[1767]: time="2025-03-19T11:38:20.938788830Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 19 11:38:20.942827 containerd[1767]: time="2025-03-19T11:38:20.942697647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rw6rs,Uid:f81a79ea-67e0-45b0-83f8-04f11ab82494,Namespace:kube-system,Attempt:0,} returns sandbox id \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\""
Mar 19 11:38:20.966404 containerd[1767]: time="2025-03-19T11:38:20.966347071Z" level=info msg="StartContainer for \"517532e0224729e22a4914b7c578accf7f2af865dbcec6d2fcb666312693063f\" returns successfully"
Mar 19 11:38:21.039795 kubelet[3439]: I0319 11:38:21.039603 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-md84s" podStartSLOduration=4.039578552 podStartE2EDuration="4.039578552s" podCreationTimestamp="2025-03-19 11:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:38:21.038558507 +0000 UTC m=+7.249377890" watchObservedRunningTime="2025-03-19 11:38:21.039578552 +0000 UTC m=+7.250397895"
Mar 19 11:38:23.050730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101553834.mount: Deactivated successfully.
Mar 19 11:38:23.876979 containerd[1767]: time="2025-03-19T11:38:23.876136150Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:38:23.880525 containerd[1767]: time="2025-03-19T11:38:23.880450769Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 19 11:38:23.886801 containerd[1767]: time="2025-03-19T11:38:23.886731357Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:38:23.888521 containerd[1767]: time="2025-03-19T11:38:23.888472924Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.949406693s"
Mar 19 11:38:23.888521 containerd[1767]: time="2025-03-19T11:38:23.888519804Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 19 11:38:23.892859 containerd[1767]: time="2025-03-19T11:38:23.892421981Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 19 11:38:23.893278 containerd[1767]: time="2025-03-19T11:38:23.893223785Z" level=info msg="CreateContainer within sandbox \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 19 11:38:23.937086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443589406.mount: Deactivated successfully.
Mar 19 11:38:23.951474 containerd[1767]: time="2025-03-19T11:38:23.951410000Z" level=info msg="CreateContainer within sandbox \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\""
Mar 19 11:38:23.952263 containerd[1767]: time="2025-03-19T11:38:23.952073923Z" level=info msg="StartContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\""
Mar 19 11:38:23.976471 systemd[1]: Started cri-containerd-c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48.scope - libcontainer container c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48.
Mar 19 11:38:24.009598 containerd[1767]: time="2025-03-19T11:38:24.009546015Z" level=info msg="StartContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" returns successfully"
Mar 19 11:38:24.053850 kubelet[3439]: I0319 11:38:24.052815 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6cr98" podStartSLOduration=3.101169182 podStartE2EDuration="6.052797045s" podCreationTimestamp="2025-03-19 11:38:18 +0000 UTC" firstStartedPulling="2025-03-19 11:38:20.938140267 +0000 UTC m=+7.148959650" lastFinishedPulling="2025-03-19 11:38:23.88976817 +0000 UTC m=+10.100587513" observedRunningTime="2025-03-19 11:38:24.052257642 +0000 UTC m=+10.263077025" watchObservedRunningTime="2025-03-19 11:38:24.052797045 +0000 UTC m=+10.263616428"
Mar 19 11:38:28.912638 waagent[1987]: 2025-03-19T11:38:28.912542Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 19 11:38:28.923284 waagent[1987]: 2025-03-19T11:38:28.921219Z INFO ExtHandler
Mar 19 11:38:28.923284 waagent[1987]: 2025-03-19T11:38:28.922423Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 44275f00-1e52-4cab-89b2-1cc87df772d1 eTag: 11448614988692864094 source: Fabric]
Mar 19 11:38:28.923284 waagent[1987]: 2025-03-19T11:38:28.922818Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 19 11:38:28.923812 waagent[1987]: 2025-03-19T11:38:28.923758Z INFO ExtHandler
Mar 19 11:38:28.925054 waagent[1987]: 2025-03-19T11:38:28.924317Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 19 11:38:28.930403 waagent[1987]: 2025-03-19T11:38:28.930354Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 19 11:38:29.034717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054700991.mount: Deactivated successfully.
Mar 19 11:38:29.040080 waagent[1987]: 2025-03-19T11:38:29.039977Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A7B74C3A18408A29BE91C43A9A5449861DC5000F', 'hasPrivateKey': False}
Mar 19 11:38:29.040579 waagent[1987]: 2025-03-19T11:38:29.040523Z INFO ExtHandler Downloaded certificate {'thumbprint': '3165E4264E08F9F2927D7773F91F0A21DE6527EB', 'hasPrivateKey': True}
Mar 19 11:38:29.041009 waagent[1987]: 2025-03-19T11:38:29.040962Z INFO ExtHandler Fetch goal state completed
Mar 19 11:38:29.041983 waagent[1987]: 2025-03-19T11:38:29.041357Z INFO ExtHandler ExtHandler
Mar 19 11:38:29.041983 waagent[1987]: 2025-03-19T11:38:29.041493Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 7c8df726-f9e7-47a4-9cf5-949bab11eade correlation 1fd9286f-f045-4c0e-88e4-019be6fd108d created: 2025-03-19T11:38:17.488701Z]
Mar 19 11:38:29.041983 waagent[1987]: 2025-03-19T11:38:29.041873Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 19 11:38:29.042849 waagent[1987]: 2025-03-19T11:38:29.042795Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms]
Mar 19 11:38:48.349057 containerd[1767]: time="2025-03-19T11:38:48.348978572Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:38:48.363930 containerd[1767]: time="2025-03-19T11:38:48.363858197Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 19 11:38:48.369542 containerd[1767]: time="2025-03-19T11:38:48.369411301Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:38:48.372094 containerd[1767]: time="2025-03-19T11:38:48.372041792Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 24.47957241s"
Mar 19 11:38:48.372094 containerd[1767]: time="2025-03-19T11:38:48.372090793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 19 11:38:48.375370 containerd[1767]: time="2025-03-19T11:38:48.375322767Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:38:48.418160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621831574.mount: Deactivated successfully.
Mar 19 11:38:48.433054 containerd[1767]: time="2025-03-19T11:38:48.432997098Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\""
Mar 19 11:38:48.434348 containerd[1767]: time="2025-03-19T11:38:48.433609061Z" level=info msg="StartContainer for \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\""
Mar 19 11:38:48.475532 systemd[1]: Started cri-containerd-1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71.scope - libcontainer container 1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71.
Mar 19 11:38:48.510260 containerd[1767]: time="2025-03-19T11:38:48.510168275Z" level=info msg="StartContainer for \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\" returns successfully"
Mar 19 11:38:48.519784 systemd[1]: cri-containerd-1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71.scope: Deactivated successfully.
Mar 19 11:38:48.571676 containerd[1767]: time="2025-03-19T11:38:48.571582263Z" level=info msg="shim disconnected" id=1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71 namespace=k8s.io
Mar 19 11:38:48.571676 containerd[1767]: time="2025-03-19T11:38:48.571646503Z" level=warning msg="cleaning up after shim disconnected" id=1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71 namespace=k8s.io
Mar 19 11:38:48.571676 containerd[1767]: time="2025-03-19T11:38:48.571695584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:38:49.095160 containerd[1767]: time="2025-03-19T11:38:49.094771386Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:38:49.152992 containerd[1767]: time="2025-03-19T11:38:49.152930719Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\""
Mar 19 11:38:49.154022 containerd[1767]: time="2025-03-19T11:38:49.153938644Z" level=info msg="StartContainer for \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\""
Mar 19 11:38:49.180508 systemd[1]: Started cri-containerd-44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca.scope - libcontainer container 44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca.
Mar 19 11:38:49.214392 containerd[1767]: time="2025-03-19T11:38:49.214339787Z" level=info msg="StartContainer for \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\" returns successfully"
Mar 19 11:38:49.221281 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:38:49.221545 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:38:49.222298 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:38:49.228420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:38:49.228715 systemd[1]: cri-containerd-44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca.scope: Deactivated successfully.
Mar 19 11:38:49.256367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:38:49.273464 containerd[1767]: time="2025-03-19T11:38:49.273380885Z" level=info msg="shim disconnected" id=44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca namespace=k8s.io
Mar 19 11:38:49.273947 containerd[1767]: time="2025-03-19T11:38:49.273679486Z" level=warning msg="cleaning up after shim disconnected" id=44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca namespace=k8s.io
Mar 19 11:38:49.273947 containerd[1767]: time="2025-03-19T11:38:49.273695886Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:38:49.414884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71-rootfs.mount: Deactivated successfully.
Mar 19 11:38:50.097845 containerd[1767]: time="2025-03-19T11:38:50.097675601Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:38:50.162104 containerd[1767]: time="2025-03-19T11:38:50.162024842Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\""
Mar 19 11:38:50.165837 containerd[1767]: time="2025-03-19T11:38:50.164128291Z" level=info msg="StartContainer for \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\""
Mar 19 11:38:50.204649 systemd[1]: Started cri-containerd-629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4.scope - libcontainer container 629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4.
Mar 19 11:38:50.239881 systemd[1]: cri-containerd-629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4.scope: Deactivated successfully.
Mar 19 11:38:50.243423 containerd[1767]: time="2025-03-19T11:38:50.243342357Z" level=info msg="StartContainer for \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\" returns successfully"
Mar 19 11:38:50.264671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4-rootfs.mount: Deactivated successfully.
Mar 19 11:38:50.277518 containerd[1767]: time="2025-03-19T11:38:50.277445346Z" level=info msg="shim disconnected" id=629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4 namespace=k8s.io
Mar 19 11:38:50.277518 containerd[1767]: time="2025-03-19T11:38:50.277508866Z" level=warning msg="cleaning up after shim disconnected" id=629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4 namespace=k8s.io
Mar 19 11:38:50.277518 containerd[1767]: time="2025-03-19T11:38:50.277516706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:38:51.100854 containerd[1767]: time="2025-03-19T11:38:51.100803818Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:38:51.156352 containerd[1767]: time="2025-03-19T11:38:51.156297100Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\""
Mar 19 11:38:51.157055 containerd[1767]: time="2025-03-19T11:38:51.157010543Z" level=info msg="StartContainer for \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\""
Mar 19 11:38:51.192477 systemd[1]: Started cri-containerd-61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67.scope - libcontainer container 61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67.
Mar 19 11:38:51.215888 systemd[1]: cri-containerd-61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67.scope: Deactivated successfully.
Mar 19 11:38:51.218015 containerd[1767]: time="2025-03-19T11:38:51.217769088Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice/cri-containerd-61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67.scope/memory.events\": no such file or directory"
Mar 19 11:38:51.223825 containerd[1767]: time="2025-03-19T11:38:51.223776554Z" level=info msg="StartContainer for \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\" returns successfully"
Mar 19 11:38:51.243440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67-rootfs.mount: Deactivated successfully.
Mar 19 11:38:51.274974 containerd[1767]: time="2025-03-19T11:38:51.274837417Z" level=info msg="shim disconnected" id=61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67 namespace=k8s.io
Mar 19 11:38:51.274974 containerd[1767]: time="2025-03-19T11:38:51.274899057Z" level=warning msg="cleaning up after shim disconnected" id=61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67 namespace=k8s.io
Mar 19 11:38:51.274974 containerd[1767]: time="2025-03-19T11:38:51.274906618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:38:52.110507 containerd[1767]: time="2025-03-19T11:38:52.110071141Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:38:52.163045 containerd[1767]: time="2025-03-19T11:38:52.162988172Z" level=info msg="CreateContainer within sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\""
Mar 19 11:38:52.163653 containerd[1767]: time="2025-03-19T11:38:52.163610975Z" level=info msg="StartContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\""
Mar 19 11:38:52.208521 systemd[1]: Started cri-containerd-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c.scope - libcontainer container 797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c.
Mar 19 11:38:52.240591 containerd[1767]: time="2025-03-19T11:38:52.240532830Z" level=info msg="StartContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" returns successfully"
Mar 19 11:38:52.399283 kubelet[3439]: I0319 11:38:52.398369 3439 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 19 11:38:52.459274 systemd[1]: Created slice kubepods-burstable-pod9cd60807_cc4e_439b_b806_ab361d130ce7.slice - libcontainer container kubepods-burstable-pod9cd60807_cc4e_439b_b806_ab361d130ce7.slice.
Mar 19 11:38:52.469142 systemd[1]: Created slice kubepods-burstable-pod3719a793_e435_4c3f_b255_dc604e70cc7d.slice - libcontainer container kubepods-burstable-pod3719a793_e435_4c3f_b255_dc604e70cc7d.slice.
Mar 19 11:38:52.474114 kubelet[3439]: I0319 11:38:52.473916 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbtx\" (UniqueName: \"kubernetes.io/projected/3719a793-e435-4c3f-b255-dc604e70cc7d-kube-api-access-djbtx\") pod \"coredns-668d6bf9bc-4vzlq\" (UID: \"3719a793-e435-4c3f-b255-dc604e70cc7d\") " pod="kube-system/coredns-668d6bf9bc-4vzlq"
Mar 19 11:38:52.474114 kubelet[3439]: I0319 11:38:52.474004 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cd60807-cc4e-439b-b806-ab361d130ce7-config-volume\") pod \"coredns-668d6bf9bc-gd7m4\" (UID: \"9cd60807-cc4e-439b-b806-ab361d130ce7\") " pod="kube-system/coredns-668d6bf9bc-gd7m4"
Mar 19 11:38:52.474114 kubelet[3439]: I0319 11:38:52.474028 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3719a793-e435-4c3f-b255-dc604e70cc7d-config-volume\") pod \"coredns-668d6bf9bc-4vzlq\" (UID: \"3719a793-e435-4c3f-b255-dc604e70cc7d\") " pod="kube-system/coredns-668d6bf9bc-4vzlq"
Mar 19 11:38:52.474114 kubelet[3439]: I0319 11:38:52.474048 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4jm\" (UniqueName: \"kubernetes.io/projected/9cd60807-cc4e-439b-b806-ab361d130ce7-kube-api-access-bd4jm\") pod \"coredns-668d6bf9bc-gd7m4\" (UID: \"9cd60807-cc4e-439b-b806-ab361d130ce7\") " pod="kube-system/coredns-668d6bf9bc-gd7m4"
Mar 19 11:38:52.766596 containerd[1767]: time="2025-03-19T11:38:52.766548405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gd7m4,Uid:9cd60807-cc4e-439b-b806-ab361d130ce7,Namespace:kube-system,Attempt:0,}"
Mar 19 11:38:52.773498 containerd[1767]: time="2025-03-19T11:38:52.773450075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vzlq,Uid:3719a793-e435-4c3f-b255-dc604e70cc7d,Namespace:kube-system,Attempt:0,}"
Mar 19 11:38:53.131904 kubelet[3439]: I0319 11:38:53.130979 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rw6rs" podStartSLOduration=7.7078772319999995 podStartE2EDuration="35.13095947s" podCreationTimestamp="2025-03-19 11:38:18 +0000 UTC" firstStartedPulling="2025-03-19 11:38:20.949672158 +0000 UTC m=+7.160491541" lastFinishedPulling="2025-03-19 11:38:48.372754396 +0000 UTC m=+34.583573779" observedRunningTime="2025-03-19 11:38:53.130771349 +0000 UTC m=+39.341590732" watchObservedRunningTime="2025-03-19 11:38:53.13095947 +0000 UTC m=+39.341778853"
Mar 19 11:38:53.149291 systemd[1]: run-containerd-runc-k8s.io-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c-runc.EYsFFF.mount: Deactivated successfully.
Mar 19 11:38:53.756607 systemd[1]: run-containerd-runc-k8s.io-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c-runc.YjwXNy.mount: Deactivated successfully.
Mar 19 11:38:54.478413 systemd-networkd[1479]: cilium_host: Link UP
Mar 19 11:38:54.478575 systemd-networkd[1479]: cilium_net: Link UP
Mar 19 11:38:54.478579 systemd-networkd[1479]: cilium_net: Gained carrier
Mar 19 11:38:54.478753 systemd-networkd[1479]: cilium_host: Gained carrier
Mar 19 11:38:54.612170 systemd-networkd[1479]: cilium_vxlan: Link UP
Mar 19 11:38:54.612177 systemd-networkd[1479]: cilium_vxlan: Gained carrier
Mar 19 11:38:54.649484 systemd-networkd[1479]: cilium_net: Gained IPv6LL
Mar 19 11:38:54.992440 kernel: NET: Registered PF_ALG protocol family
Mar 19 11:38:55.329420 systemd-networkd[1479]: cilium_host: Gained IPv6LL
Mar 19 11:38:55.908565 systemd-networkd[1479]: lxc_health: Link UP
Mar 19 11:38:55.918994 systemd-networkd[1479]: lxc_health: Gained carrier
Mar 19 11:38:56.097384 systemd-networkd[1479]: cilium_vxlan: Gained IPv6LL
Mar 19 11:38:56.401456 systemd-networkd[1479]: lxccca38327f495: Link UP
Mar 19 11:38:56.403269 kernel: eth0: renamed from tmp02ea0
Mar 19 11:38:56.410360 systemd-networkd[1479]: lxccca38327f495: Gained carrier
Mar 19 11:38:56.411820 systemd-networkd[1479]: lxc6c2f6678cdf6: Link UP
Mar 19 11:38:56.431314 kernel: eth0: renamed from tmpe5401
Mar 19 11:38:56.445493 systemd-networkd[1479]: lxc6c2f6678cdf6: Gained carrier
Mar 19 11:38:57.377402 systemd-networkd[1479]: lxc_health: Gained IPv6LL
Mar 19 11:38:57.697395 systemd-networkd[1479]: lxc6c2f6678cdf6: Gained IPv6LL
Mar 19 11:38:57.825398 systemd-networkd[1479]: lxccca38327f495: Gained IPv6LL
Mar 19 11:39:00.855297 containerd[1767]: time="2025-03-19T11:39:00.855153192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:39:00.856740 containerd[1767]: time="2025-03-19T11:39:00.856095876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:39:00.856740 containerd[1767]: time="2025-03-19T11:39:00.856442078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:39:00.857148 containerd[1767]: time="2025-03-19T11:39:00.857004280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:39:00.881481 systemd[1]: Started cri-containerd-e54011bd25413fbbd02cb4b8e2e8af3828bab27ed2a6a87ea20c48ed40fc28c7.scope - libcontainer container e54011bd25413fbbd02cb4b8e2e8af3828bab27ed2a6a87ea20c48ed40fc28c7.
Mar 19 11:39:00.886952 containerd[1767]: time="2025-03-19T11:39:00.885294724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:39:00.886952 containerd[1767]: time="2025-03-19T11:39:00.885815806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:39:00.886952 containerd[1767]: time="2025-03-19T11:39:00.885828846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:39:00.886952 containerd[1767]: time="2025-03-19T11:39:00.885978287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:39:00.926516 systemd[1]: Started cri-containerd-02ea0c930f7b9ea3c5bcae72c4ac69d367fa793c8cdbd41f7b79d7e237280238.scope - libcontainer container 02ea0c930f7b9ea3c5bcae72c4ac69d367fa793c8cdbd41f7b79d7e237280238.
Mar 19 11:39:00.988893 containerd[1767]: time="2025-03-19T11:39:00.988788976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vzlq,Uid:3719a793-e435-4c3f-b255-dc604e70cc7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54011bd25413fbbd02cb4b8e2e8af3828bab27ed2a6a87ea20c48ed40fc28c7\""
Mar 19 11:39:00.996857 containerd[1767]: time="2025-03-19T11:39:00.996704770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gd7m4,Uid:9cd60807-cc4e-439b-b806-ab361d130ce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"02ea0c930f7b9ea3c5bcae72c4ac69d367fa793c8cdbd41f7b79d7e237280238\""
Mar 19 11:39:01.002646 containerd[1767]: time="2025-03-19T11:39:01.002547236Z" level=info msg="CreateContainer within sandbox \"e54011bd25413fbbd02cb4b8e2e8af3828bab27ed2a6a87ea20c48ed40fc28c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:39:01.023728 containerd[1767]: time="2025-03-19T11:39:01.023648328Z" level=info msg="CreateContainer within sandbox \"02ea0c930f7b9ea3c5bcae72c4ac69d367fa793c8cdbd41f7b79d7e237280238\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:39:01.097058 containerd[1767]: time="2025-03-19T11:39:01.096994168Z" level=info msg="CreateContainer within sandbox \"e54011bd25413fbbd02cb4b8e2e8af3828bab27ed2a6a87ea20c48ed40fc28c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a80d429ca1e1889838d33ba799e706ed0e9711e71bf17017faf72abd57beb6c8\""
Mar 19 11:39:01.097915 containerd[1767]: time="2025-03-19T11:39:01.097860332Z" level=info msg="StartContainer for \"a80d429ca1e1889838d33ba799e706ed0e9711e71bf17017faf72abd57beb6c8\""
Mar 19 11:39:01.127481 systemd[1]: Started cri-containerd-a80d429ca1e1889838d33ba799e706ed0e9711e71bf17017faf72abd57beb6c8.scope - libcontainer container a80d429ca1e1889838d33ba799e706ed0e9711e71bf17017faf72abd57beb6c8.
Mar 19 11:39:01.142146 containerd[1767]: time="2025-03-19T11:39:01.141407002Z" level=info msg="CreateContainer within sandbox \"02ea0c930f7b9ea3c5bcae72c4ac69d367fa793c8cdbd41f7b79d7e237280238\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02401b92962e3b33dee853b994bbfc6768d722c4221f5840c22f02bf0426a0db\""
Mar 19 11:39:01.144275 containerd[1767]: time="2025-03-19T11:39:01.143622372Z" level=info msg="StartContainer for \"02401b92962e3b33dee853b994bbfc6768d722c4221f5840c22f02bf0426a0db\""
Mar 19 11:39:01.178283 containerd[1767]: time="2025-03-19T11:39:01.176090674Z" level=info msg="StartContainer for \"a80d429ca1e1889838d33ba799e706ed0e9711e71bf17017faf72abd57beb6c8\" returns successfully"
Mar 19 11:39:01.182573 systemd[1]: Started cri-containerd-02401b92962e3b33dee853b994bbfc6768d722c4221f5840c22f02bf0426a0db.scope - libcontainer container 02401b92962e3b33dee853b994bbfc6768d722c4221f5840c22f02bf0426a0db.
Mar 19 11:39:01.246977 containerd[1767]: time="2025-03-19T11:39:01.246928703Z" level=info msg="StartContainer for \"02401b92962e3b33dee853b994bbfc6768d722c4221f5840c22f02bf0426a0db\" returns successfully"
Mar 19 11:39:02.158153 kubelet[3439]: I0319 11:39:02.158075 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4vzlq" podStartSLOduration=44.158039923 podStartE2EDuration="44.158039923s" podCreationTimestamp="2025-03-19 11:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:39:02.156566037 +0000 UTC m=+48.367385420" watchObservedRunningTime="2025-03-19 11:39:02.158039923 +0000 UTC m=+48.368859306"
Mar 19 11:39:02.206179 kubelet[3439]: I0319 11:39:02.206100 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gd7m4" podStartSLOduration=44.206077253 podStartE2EDuration="44.206077253s" podCreationTimestamp="2025-03-19 11:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:39:02.18026902 +0000 UTC m=+48.391088403" watchObservedRunningTime="2025-03-19 11:39:02.206077253 +0000 UTC m=+48.416896636"
Mar 19 11:39:02.959090 sudo[2343]: pam_unix(sudo:session): session closed for user root
Mar 19 11:39:03.039433 sshd[2342]: Connection closed by 10.200.16.10 port 54240
Mar 19 11:39:03.040363 sshd-session[2340]: pam_unix(sshd:session): session closed for user core
Mar 19 11:39:03.045671 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:54240.service: Deactivated successfully.
Mar 19 11:39:03.045758 systemd-logind[1750]: Session 9 logged out. Waiting for processes to exit.
Mar 19 11:39:03.049980 systemd[1]: session-9.scope: Deactivated successfully.
Mar 19 11:39:03.050531 systemd[1]: session-9.scope: Consumed 7.514s CPU time, 260.4M memory peak.
Mar 19 11:39:03.052790 systemd-logind[1750]: Removed session 9.
Mar 19 11:40:21.944671 update_engine[1753]: I20250319 11:40:21.944609 1753 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 19 11:40:21.944671 update_engine[1753]: I20250319 11:40:21.944664 1753 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 19 11:40:21.945130 update_engine[1753]: I20250319 11:40:21.944839 1753 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946664 1753 omaha_request_params.cc:62] Current group set to beta
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946780 1753 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946789 1753 update_attempter.cc:643] Scheduling an action processor start.
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946808 1753 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946847 1753 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946899 1753 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946906 1753 omaha_request_action.cc:272] Request:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]:
Mar 19 11:40:21.946928 update_engine[1753]: I20250319 11:40:21.946913 1753 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:40:21.947696 locksmithd[1786]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 19 11:40:21.948152 update_engine[1753]: I20250319 11:40:21.948108 1753 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:40:21.948575 update_engine[1753]: I20250319 11:40:21.948532 1753 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:40:22.040136 update_engine[1753]: E20250319 11:40:22.040068 1753 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:40:22.040306 update_engine[1753]: I20250319 11:40:22.040186 1753 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 19 11:40:31.874860 update_engine[1753]: I20250319 11:40:31.874768 1753 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:40:31.875378 update_engine[1753]: I20250319 11:40:31.875036 1753 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:40:31.875378 update_engine[1753]: I20250319 11:40:31.875360 1753 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:40:31.886710 update_engine[1753]: E20250319 11:40:31.886647 1753 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:40:31.886851 update_engine[1753]: I20250319 11:40:31.886739 1753 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 19 11:40:41.875408 update_engine[1753]: I20250319 11:40:41.875289 1753 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:40:41.875729 update_engine[1753]: I20250319 11:40:41.875526 1753 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:40:41.875864 update_engine[1753]: I20250319 11:40:41.875777 1753 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:40:41.921572 update_engine[1753]: E20250319 11:40:41.921509 1753 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:40:41.921715 update_engine[1753]: I20250319 11:40:41.921608 1753 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 19 11:40:45.034116 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:52758.service - OpenSSH per-connection server daemon (10.200.16.10:52758).
Mar 19 11:40:45.522750 sshd[4966]: Accepted publickey for core from 10.200.16.10 port 52758 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:40:45.524682 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:40:45.529876 systemd-logind[1750]: New session 10 of user core.
Mar 19 11:40:45.542500 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 19 11:40:46.017413 sshd[4968]: Connection closed by 10.200.16.10 port 52758
Mar 19 11:40:46.017861 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Mar 19 11:40:46.022764 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:52758.service: Deactivated successfully.
Mar 19 11:40:46.026299 systemd[1]: session-10.scope: Deactivated successfully.
Mar 19 11:40:46.027643 systemd-logind[1750]: Session 10 logged out. Waiting for processes to exit.
Mar 19 11:40:46.028941 systemd-logind[1750]: Removed session 10.
Mar 19 11:40:51.111757 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:55152.service - OpenSSH per-connection server daemon (10.200.16.10:55152).
Mar 19 11:40:51.598516 sshd[4982]: Accepted publickey for core from 10.200.16.10 port 55152 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:40:51.599971 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:40:51.606491 systemd-logind[1750]: New session 11 of user core.
Mar 19 11:40:51.608475 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 19 11:40:51.866882 update_engine[1753]: I20250319 11:40:51.866710 1753 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:40:51.867252 update_engine[1753]: I20250319 11:40:51.866962 1753 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:40:51.867287 update_engine[1753]: I20250319 11:40:51.867229 1753 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:40:51.884568 update_engine[1753]: E20250319 11:40:51.883534 1753 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883659 1753 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883667 1753 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:40:51.884568 update_engine[1753]: E20250319 11:40:51.883795 1753 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883819 1753 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883826 1753 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883831 1753 update_attempter.cc:306] Processing Done.
Mar 19 11:40:51.884568 update_engine[1753]: E20250319 11:40:51.883849 1753 update_attempter.cc:619] Update failed.
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883859 1753 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883864 1753 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883870 1753 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883962 1753 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883991 1753 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 19 11:40:51.884568 update_engine[1753]: I20250319 11:40:51.883998 1753 omaha_request_action.cc:272] Request:
Mar 19 11:40:51.884568 update_engine[1753]:
Mar 19 11:40:51.884568 update_engine[1753]:
Mar 19 11:40:51.885524 update_engine[1753]:
Mar 19 11:40:51.885524 update_engine[1753]:
Mar 19 11:40:51.885524 update_engine[1753]:
Mar 19 11:40:51.885524 update_engine[1753]:
Mar 19 11:40:51.885524 update_engine[1753]: I20250319 11:40:51.884004 1753 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:40:51.885524 update_engine[1753]: I20250319 11:40:51.884199 1753 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:40:51.885524 update_engine[1753]: I20250319 11:40:51.884510 1753 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:40:51.885661 locksmithd[1786]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 19 11:40:51.945749 update_engine[1753]: E20250319 11:40:51.945525 1753 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:40:51.946639 update_engine[1753]: I20250319 11:40:51.946326 1753 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946736 1753 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946758 1753 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946764 1753 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946770 1753 update_attempter.cc:306] Processing Done.
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946777 1753 update_attempter.cc:310] Error event sent.
Mar 19 11:40:51.946883 update_engine[1753]: I20250319 11:40:51.946789 1753 update_check_scheduler.cc:74] Next update check in 43m53s
Mar 19 11:40:51.947192 locksmithd[1786]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 19 11:40:52.023818 sshd[4985]: Connection closed by 10.200.16.10 port 55152
Mar 19 11:40:52.024697 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Mar 19 11:40:52.030752 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:55152.service: Deactivated successfully.
Mar 19 11:40:52.034260 systemd[1]: session-11.scope: Deactivated successfully.
Mar 19 11:40:52.036016 systemd-logind[1750]: Session 11 logged out. Waiting for processes to exit.
Mar 19 11:40:52.038175 systemd-logind[1750]: Removed session 11.
Mar 19 11:40:57.107375 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:55164.service - OpenSSH per-connection server daemon (10.200.16.10:55164).
Mar 19 11:40:57.556809 sshd[4999]: Accepted publickey for core from 10.200.16.10 port 55164 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:40:57.558347 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:40:57.564335 systemd-logind[1750]: New session 12 of user core.
Mar 19 11:40:57.570443 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 19 11:40:57.956947 sshd[5001]: Connection closed by 10.200.16.10 port 55164
Mar 19 11:40:57.957991 sshd-session[4999]: pam_unix(sshd:session): session closed for user core
Mar 19 11:40:57.962953 systemd-logind[1750]: Session 12 logged out. Waiting for processes to exit.
Mar 19 11:40:57.963168 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:55164.service: Deactivated successfully.
Mar 19 11:40:57.967429 systemd[1]: session-12.scope: Deactivated successfully.
Mar 19 11:40:57.970869 systemd-logind[1750]: Removed session 12.
Mar 19 11:41:03.059749 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:42096.service - OpenSSH per-connection server daemon (10.200.16.10:42096).
Mar 19 11:41:03.547607 sshd[5014]: Accepted publickey for core from 10.200.16.10 port 42096 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:03.550031 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:03.557732 systemd-logind[1750]: New session 13 of user core.
Mar 19 11:41:03.564471 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 19 11:41:03.967461 sshd[5016]: Connection closed by 10.200.16.10 port 42096
Mar 19 11:41:03.968172 sshd-session[5014]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:03.975135 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:42096.service: Deactivated successfully.
Mar 19 11:41:03.979163 systemd[1]: session-13.scope: Deactivated successfully.
Mar 19 11:41:03.980401 systemd-logind[1750]: Session 13 logged out. Waiting for processes to exit.
Mar 19 11:41:03.981864 systemd-logind[1750]: Removed session 13.
Mar 19 11:41:04.060931 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:42104.service - OpenSSH per-connection server daemon (10.200.16.10:42104).
Mar 19 11:41:04.508901 sshd[5028]: Accepted publickey for core from 10.200.16.10 port 42104 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:04.509518 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:04.514698 systemd-logind[1750]: New session 14 of user core.
Mar 19 11:41:04.522470 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 19 11:41:04.937424 sshd[5030]: Connection closed by 10.200.16.10 port 42104
Mar 19 11:41:04.937172 sshd-session[5028]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:04.943208 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:42104.service: Deactivated successfully.
Mar 19 11:41:04.943212 systemd-logind[1750]: Session 14 logged out. Waiting for processes to exit.
Mar 19 11:41:04.946379 systemd[1]: session-14.scope: Deactivated successfully.
Mar 19 11:41:04.947759 systemd-logind[1750]: Removed session 14.
Mar 19 11:41:05.035370 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:42108.service - OpenSSH per-connection server daemon (10.200.16.10:42108).
Mar 19 11:41:05.487359 sshd[5040]: Accepted publickey for core from 10.200.16.10 port 42108 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:05.489070 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:05.494749 systemd-logind[1750]: New session 15 of user core.
Mar 19 11:41:05.506624 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 19 11:41:05.881824 sshd[5042]: Connection closed by 10.200.16.10 port 42108
Mar 19 11:41:05.882665 sshd-session[5040]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:05.886871 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:42108.service: Deactivated successfully.
Mar 19 11:41:05.890611 systemd[1]: session-15.scope: Deactivated successfully.
Mar 19 11:41:05.893105 systemd-logind[1750]: Session 15 logged out. Waiting for processes to exit.
Mar 19 11:41:05.894878 systemd-logind[1750]: Removed session 15.
Mar 19 11:41:10.973564 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:52018.service - OpenSSH per-connection server daemon (10.200.16.10:52018).
Mar 19 11:41:11.417828 sshd[5055]: Accepted publickey for core from 10.200.16.10 port 52018 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:11.418472 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:11.425095 systemd-logind[1750]: New session 16 of user core.
Mar 19 11:41:11.434492 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 19 11:41:11.806171 sshd[5057]: Connection closed by 10.200.16.10 port 52018
Mar 19 11:41:11.806876 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:11.810362 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:52018.service: Deactivated successfully.
Mar 19 11:41:11.813736 systemd[1]: session-16.scope: Deactivated successfully.
Mar 19 11:41:11.815689 systemd-logind[1750]: Session 16 logged out. Waiting for processes to exit.
Mar 19 11:41:11.817147 systemd-logind[1750]: Removed session 16.
Mar 19 11:41:16.895610 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:52020.service - OpenSSH per-connection server daemon (10.200.16.10:52020).
Mar 19 11:41:17.345476 sshd[5071]: Accepted publickey for core from 10.200.16.10 port 52020 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:17.347589 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:17.355615 systemd-logind[1750]: New session 17 of user core.
Mar 19 11:41:17.362497 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 11:41:17.746399 sshd[5076]: Connection closed by 10.200.16.10 port 52020
Mar 19 11:41:17.747107 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:17.751309 systemd-logind[1750]: Session 17 logged out. Waiting for processes to exit.
Mar 19 11:41:17.752019 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:52020.service: Deactivated successfully.
Mar 19 11:41:17.754137 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 11:41:17.755836 systemd-logind[1750]: Removed session 17.
Mar 19 11:41:17.833755 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:52036.service - OpenSSH per-connection server daemon (10.200.16.10:52036).
Mar 19 11:41:18.278155 sshd[5088]: Accepted publickey for core from 10.200.16.10 port 52036 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:18.279670 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:18.284864 systemd-logind[1750]: New session 18 of user core.
Mar 19 11:41:18.292628 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 11:41:18.736538 sshd[5090]: Connection closed by 10.200.16.10 port 52036
Mar 19 11:41:18.737116 sshd-session[5088]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:18.739898 systemd-logind[1750]: Session 18 logged out. Waiting for processes to exit.
Mar 19 11:41:18.740139 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:52036.service: Deactivated successfully.
Mar 19 11:41:18.741928 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 11:41:18.744099 systemd-logind[1750]: Removed session 18.
Mar 19 11:41:18.825581 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:40616.service - OpenSSH per-connection server daemon (10.200.16.10:40616).
Mar 19 11:41:19.270352 sshd[5100]: Accepted publickey for core from 10.200.16.10 port 40616 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:19.271829 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:19.276225 systemd-logind[1750]: New session 19 of user core.
Mar 19 11:41:19.282417 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:41:20.418495 sshd[5102]: Connection closed by 10.200.16.10 port 40616
Mar 19 11:41:20.419602 sshd-session[5100]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:20.425307 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:40616.service: Deactivated successfully.
Mar 19 11:41:20.428187 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:41:20.429137 systemd-logind[1750]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:41:20.430539 systemd-logind[1750]: Removed session 19.
Mar 19 11:41:20.508571 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:40630.service - OpenSSH per-connection server daemon (10.200.16.10:40630).
Mar 19 11:41:20.953393 sshd[5119]: Accepted publickey for core from 10.200.16.10 port 40630 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:20.954734 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:20.961129 systemd-logind[1750]: New session 20 of user core.
Mar 19 11:41:20.965475 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:41:21.465963 sshd[5121]: Connection closed by 10.200.16.10 port 40630
Mar 19 11:41:21.465849 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:21.469498 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:40630.service: Deactivated successfully.
Mar 19 11:41:21.472099 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:41:21.474504 systemd-logind[1750]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:41:21.475919 systemd-logind[1750]: Removed session 20.
Mar 19 11:41:21.563919 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:40646.service - OpenSSH per-connection server daemon (10.200.16.10:40646).
Mar 19 11:41:22.050591 sshd[5133]: Accepted publickey for core from 10.200.16.10 port 40646 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:22.051966 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:22.056317 systemd-logind[1750]: New session 21 of user core.
Mar 19 11:41:22.062505 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 11:41:22.464409 sshd[5135]: Connection closed by 10.200.16.10 port 40646
Mar 19 11:41:22.465378 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:22.470186 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:40646.service: Deactivated successfully.
Mar 19 11:41:22.472439 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 11:41:22.473659 systemd-logind[1750]: Session 21 logged out. Waiting for processes to exit.
Mar 19 11:41:22.475074 systemd-logind[1750]: Removed session 21.
Mar 19 11:41:27.565928 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:40654.service - OpenSSH per-connection server daemon (10.200.16.10:40654).
Mar 19 11:41:28.054175 sshd[5148]: Accepted publickey for core from 10.200.16.10 port 40654 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:28.055638 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:28.060309 systemd-logind[1750]: New session 22 of user core.
Mar 19 11:41:28.066452 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 11:41:28.470893 sshd[5154]: Connection closed by 10.200.16.10 port 40654
Mar 19 11:41:28.470783 sshd-session[5148]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:28.474162 systemd-logind[1750]: Session 22 logged out. Waiting for processes to exit.
Mar 19 11:41:28.474873 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:40654.service: Deactivated successfully.
Mar 19 11:41:28.477846 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 11:41:28.480860 systemd-logind[1750]: Removed session 22.
Mar 19 11:41:33.563434 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:38834.service - OpenSSH per-connection server daemon (10.200.16.10:38834).
Mar 19 11:41:34.056276 sshd[5166]: Accepted publickey for core from 10.200.16.10 port 38834 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:34.057786 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:34.063671 systemd-logind[1750]: New session 23 of user core.
Mar 19 11:41:34.070489 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 11:41:34.472144 sshd[5168]: Connection closed by 10.200.16.10 port 38834
Mar 19 11:41:34.472023 sshd-session[5166]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:34.476755 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:38834.service: Deactivated successfully.
Mar 19 11:41:34.479064 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 11:41:34.480179 systemd-logind[1750]: Session 23 logged out. Waiting for processes to exit.
Mar 19 11:41:34.481531 systemd-logind[1750]: Removed session 23.
Mar 19 11:41:39.564542 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:47616.service - OpenSSH per-connection server daemon (10.200.16.10:47616).
Mar 19 11:41:40.012696 sshd[5180]: Accepted publickey for core from 10.200.16.10 port 47616 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:40.013181 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:40.020735 systemd-logind[1750]: New session 24 of user core.
Mar 19 11:41:40.023478 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 11:41:40.396276 sshd[5182]: Connection closed by 10.200.16.10 port 47616
Mar 19 11:41:40.396064 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:40.400004 systemd-logind[1750]: Session 24 logged out. Waiting for processes to exit.
Mar 19 11:41:40.400649 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:47616.service: Deactivated successfully.
Mar 19 11:41:40.402945 systemd[1]: session-24.scope: Deactivated successfully.
Mar 19 11:41:40.404681 systemd-logind[1750]: Removed session 24.
Mar 19 11:41:40.489581 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:47618.service - OpenSSH per-connection server daemon (10.200.16.10:47618).
Mar 19 11:41:40.978792 sshd[5194]: Accepted publickey for core from 10.200.16.10 port 47618 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk
Mar 19 11:41:40.980183 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:41:40.984563 systemd-logind[1750]: New session 25 of user core.
Mar 19 11:41:40.991435 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 19 11:41:42.738085 containerd[1767]: time="2025-03-19T11:41:42.738023776Z" level=info msg="StopContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" with timeout 30 (s)"
Mar 19 11:41:42.744523 containerd[1767]: time="2025-03-19T11:41:42.739694065Z" level=info msg="Stop container \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" with signal terminated"
Mar 19 11:41:42.751340 containerd[1767]: time="2025-03-19T11:41:42.751198127Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:41:42.753428 systemd[1]: cri-containerd-c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48.scope: Deactivated successfully.
Mar 19 11:41:42.763914 containerd[1767]: time="2025-03-19T11:41:42.763859035Z" level=info msg="StopContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" with timeout 2 (s)"
Mar 19 11:41:42.765394 containerd[1767]: time="2025-03-19T11:41:42.764464758Z" level=info msg="Stop container \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" with signal terminated"
Mar 19 11:41:42.777097 systemd-networkd[1479]: lxc_health: Link DOWN
Mar 19 11:41:42.777106 systemd-networkd[1479]: lxc_health: Lost carrier
Mar 19 11:41:42.797217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48-rootfs.mount: Deactivated successfully.
Mar 19 11:41:42.800324 systemd[1]: cri-containerd-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c.scope: Deactivated successfully.
Mar 19 11:41:42.801075 systemd[1]: cri-containerd-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c.scope: Consumed 7.631s CPU time, 138.1M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:41:42.823034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c-rootfs.mount: Deactivated successfully.
Mar 19 11:41:42.899256 containerd[1767]: time="2025-03-19T11:41:42.899063999Z" level=info msg="shim disconnected" id=797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c namespace=k8s.io
Mar 19 11:41:42.899482 containerd[1767]: time="2025-03-19T11:41:42.899274880Z" level=warning msg="cleaning up after shim disconnected" id=797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c namespace=k8s.io
Mar 19 11:41:42.899482 containerd[1767]: time="2025-03-19T11:41:42.899292080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:42.899482 containerd[1767]: time="2025-03-19T11:41:42.899128719Z" level=info msg="shim disconnected" id=c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48 namespace=k8s.io
Mar 19 11:41:42.899482 containerd[1767]: time="2025-03-19T11:41:42.899438441Z" level=warning msg="cleaning up after shim disconnected" id=c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48 namespace=k8s.io
Mar 19 11:41:42.899482 containerd[1767]: time="2025-03-19T11:41:42.899445481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:42.928306 containerd[1767]: time="2025-03-19T11:41:42.928229595Z" level=info msg="StopContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" returns successfully"
Mar 19 11:41:42.929046 containerd[1767]: time="2025-03-19T11:41:42.929013639Z" level=info msg="StopPodSandbox for \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\""
Mar 19 11:41:42.931096 containerd[1767]: time="2025-03-19T11:41:42.929060280Z" level=info msg="Container to stop \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.931039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4-shm.mount: Deactivated successfully.
Mar 19 11:41:42.932356 containerd[1767]: time="2025-03-19T11:41:42.931933735Z" level=info msg="StopContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" returns successfully"
Mar 19 11:41:42.934438 containerd[1767]: time="2025-03-19T11:41:42.934360948Z" level=info msg="StopPodSandbox for \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\""
Mar 19 11:41:42.934575 containerd[1767]: time="2025-03-19T11:41:42.934445348Z" level=info msg="Container to stop \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.934624 containerd[1767]: time="2025-03-19T11:41:42.934575749Z" level=info msg="Container to stop \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.934624 containerd[1767]: time="2025-03-19T11:41:42.934587709Z" level=info msg="Container to stop \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.934624 containerd[1767]: time="2025-03-19T11:41:42.934603069Z" level=info msg="Container to stop \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.934685 containerd[1767]: time="2025-03-19T11:41:42.934648789Z" level=info msg="Container to stop \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:41:42.937184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441-shm.mount: Deactivated successfully.
Mar 19 11:41:42.941493 systemd[1]: cri-containerd-d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4.scope: Deactivated successfully.
Mar 19 11:41:42.948638 systemd[1]: cri-containerd-877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441.scope: Deactivated successfully.
Mar 19 11:41:42.993637 containerd[1767]: time="2025-03-19T11:41:42.993498945Z" level=info msg="shim disconnected" id=d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4 namespace=k8s.io
Mar 19 11:41:42.993884 containerd[1767]: time="2025-03-19T11:41:42.993863907Z" level=warning msg="cleaning up after shim disconnected" id=d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4 namespace=k8s.io
Mar 19 11:41:42.993955 containerd[1767]: time="2025-03-19T11:41:42.993942707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:42.994362 containerd[1767]: time="2025-03-19T11:41:42.994297869Z" level=info msg="shim disconnected" id=877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441 namespace=k8s.io
Mar 19 11:41:42.994531 containerd[1767]: time="2025-03-19T11:41:42.994509990Z" level=warning msg="cleaning up after shim disconnected" id=877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441 namespace=k8s.io
Mar 19 11:41:42.994597 containerd[1767]: time="2025-03-19T11:41:42.994584670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:43.011722 containerd[1767]: time="2025-03-19T11:41:43.011360240Z" level=info msg="TearDown network for sandbox \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\" successfully"
Mar 19 11:41:43.011722 containerd[1767]: time="2025-03-19T11:41:43.011401521Z" level=info msg="StopPodSandbox for \"d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4\" returns successfully"
Mar 19 11:41:43.015614 containerd[1767]: time="2025-03-19T11:41:43.015572903Z" level=info msg="TearDown network for sandbox \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" successfully"
Mar 19 11:41:43.016212 containerd[1767]: time="2025-03-19T11:41:43.015864384Z" level=info msg="StopPodSandbox for \"877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441\" returns successfully"
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173334 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-xtables-lock\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173380 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cni-path\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173398 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-kernel\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173421 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9m69\" (UniqueName: \"kubernetes.io/projected/20a011e1-04e2-4f67-b8c7-4ebf54237e33-kube-api-access-h9m69\") pod \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\" (UID: \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\") "
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173438 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-hubble-tls\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174152 kubelet[3439]: I0319 11:41:43.173454 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20a011e1-04e2-4f67-b8c7-4ebf54237e33-cilium-config-path\") pod \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\" (UID: \"20a011e1-04e2-4f67-b8c7-4ebf54237e33\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173471 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-hostproc\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173488 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5wfb\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-kube-api-access-k5wfb\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173507 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-lib-modules\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173528 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f81a79ea-67e0-45b0-83f8-04f11ab82494-clustermesh-secrets\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173546 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-net\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174654 kubelet[3439]: I0319 11:41:43.173560 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-bpf-maps\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174784 kubelet[3439]: I0319 11:41:43.173577 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-config-path\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174784 kubelet[3439]: I0319 11:41:43.173606 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-etc-cni-netd\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174784 kubelet[3439]: I0319 11:41:43.173623 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-run\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174784 kubelet[3439]: I0319 11:41:43.173636 3439 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-cgroup\") pod \"f81a79ea-67e0-45b0-83f8-04f11ab82494\" (UID: \"f81a79ea-67e0-45b0-83f8-04f11ab82494\") "
Mar 19 11:41:43.174784 kubelet[3439]: I0319 11:41:43.173723 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.174885 kubelet[3439]: I0319 11:41:43.173761 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.174885 kubelet[3439]: I0319 11:41:43.173777 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cni-path" (OuterVolumeSpecName: "cni-path") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.174885 kubelet[3439]: I0319 11:41:43.173791 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.176545 kubelet[3439]: I0319 11:41:43.176333 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.176545 kubelet[3439]: I0319 11:41:43.176405 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.177120 kubelet[3439]: I0319 11:41:43.177093 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.178367 kubelet[3439]: I0319 11:41:43.177229 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.178367 kubelet[3439]: I0319 11:41:43.177579 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-hostproc" (OuterVolumeSpecName: "hostproc") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.179102 kubelet[3439]: I0319 11:41:43.179067 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:41:43.179390 kubelet[3439]: I0319 11:41:43.179367 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81a79ea-67e0-45b0-83f8-04f11ab82494-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 19 11:41:43.179508 kubelet[3439]: I0319 11:41:43.179492 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 19 11:41:43.179632 kubelet[3439]: I0319 11:41:43.179610 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a011e1-04e2-4f67-b8c7-4ebf54237e33-kube-api-access-h9m69" (OuterVolumeSpecName: "kube-api-access-h9m69") pod "20a011e1-04e2-4f67-b8c7-4ebf54237e33" (UID: "20a011e1-04e2-4f67-b8c7-4ebf54237e33"). InnerVolumeSpecName "kube-api-access-h9m69". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 19 11:41:43.181763 kubelet[3439]: I0319 11:41:43.181727 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 19 11:41:43.183948 kubelet[3439]: I0319 11:41:43.183909 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-kube-api-access-k5wfb" (OuterVolumeSpecName: "kube-api-access-k5wfb") pod "f81a79ea-67e0-45b0-83f8-04f11ab82494" (UID: "f81a79ea-67e0-45b0-83f8-04f11ab82494"). InnerVolumeSpecName "kube-api-access-k5wfb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 19 11:41:43.184208 kubelet[3439]: I0319 11:41:43.184184 3439 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a011e1-04e2-4f67-b8c7-4ebf54237e33-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20a011e1-04e2-4f67-b8c7-4ebf54237e33" (UID: "20a011e1-04e2-4f67-b8c7-4ebf54237e33"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274501 3439 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-lib-modules\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274544 3439 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f81a79ea-67e0-45b0-83f8-04f11ab82494-clustermesh-secrets\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274555 3439 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-net\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274565 3439 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-bpf-maps\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274576 3439 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-config-path\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274587 3439 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-etc-cni-netd\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274597 3439 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-run\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274634 kubelet[3439]: I0319 11:41:43.274605 3439 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cilium-cgroup\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274613 3439 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-xtables-lock\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274630 3439 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-cni-path\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274640 3439 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-host-proc-sys-kernel\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274652 3439 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9m69\" (UniqueName: \"kubernetes.io/projected/20a011e1-04e2-4f67-b8c7-4ebf54237e33-kube-api-access-h9m69\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274660 3439 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-hubble-tls\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274669 3439 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20a011e1-04e2-4f67-b8c7-4ebf54237e33-cilium-config-path\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274677 3439 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f81a79ea-67e0-45b0-83f8-04f11ab82494-hostproc\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.274896 kubelet[3439]: I0319 11:41:43.274685 3439 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k5wfb\" (UniqueName: \"kubernetes.io/projected/f81a79ea-67e0-45b0-83f8-04f11ab82494-kube-api-access-k5wfb\") on node \"ci-4230.1.0-a-2247daed6b\" DevicePath \"\""
Mar 19 11:41:43.478632 kubelet[3439]: I0319 11:41:43.478494 3439 scope.go:117] "RemoveContainer" containerID="797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c"
Mar 19 11:41:43.481272 containerd[1767]: time="2025-03-19T11:41:43.481205997Z" level=info msg="RemoveContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\""
Mar 19 11:41:43.487977 systemd[1]: Removed slice kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice - libcontainer container kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice.
Mar 19 11:41:43.488759 systemd[1]: kubepods-burstable-podf81a79ea_67e0_45b0_83f8_04f11ab82494.slice: Consumed 7.712s CPU time, 138.6M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:41:43.491820 systemd[1]: Removed slice kubepods-besteffort-pod20a011e1_04e2_4f67_b8c7_4ebf54237e33.slice - libcontainer container kubepods-besteffort-pod20a011e1_04e2_4f67_b8c7_4ebf54237e33.slice.
Mar 19 11:41:43.565169 containerd[1767]: time="2025-03-19T11:41:43.564949365Z" level=info msg="RemoveContainer for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" returns successfully" Mar 19 11:41:43.565460 kubelet[3439]: I0319 11:41:43.565307 3439 scope.go:117] "RemoveContainer" containerID="61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67" Mar 19 11:41:43.566972 containerd[1767]: time="2025-03-19T11:41:43.566939696Z" level=info msg="RemoveContainer for \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\"" Mar 19 11:41:43.655972 containerd[1767]: time="2025-03-19T11:41:43.655730132Z" level=info msg="RemoveContainer for \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\" returns successfully" Mar 19 11:41:43.656106 kubelet[3439]: I0319 11:41:43.656016 3439 scope.go:117] "RemoveContainer" containerID="629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4" Mar 19 11:41:43.657604 containerd[1767]: time="2025-03-19T11:41:43.657326300Z" level=info msg="RemoveContainer for \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\"" Mar 19 11:41:43.722863 containerd[1767]: time="2025-03-19T11:41:43.722533329Z" level=info msg="RemoveContainer for \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\" returns successfully" Mar 19 11:41:43.723011 kubelet[3439]: I0319 11:41:43.722774 3439 scope.go:117] "RemoveContainer" containerID="44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca" Mar 19 11:41:43.724193 containerd[1767]: time="2025-03-19T11:41:43.724161218Z" level=info msg="RemoveContainer for \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\"" Mar 19 11:41:43.732381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-877f59bc4303e0a8a0e334b3b4d69bafb17b493baa2a8a9d9c51aae8bd934441-rootfs.mount: Deactivated successfully. 
Mar 19 11:41:43.732792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9d59b3fd0ae8a1ae490a2bfd890f92bfc33908cd1fe321b6065a7c1b46178e4-rootfs.mount: Deactivated successfully. Mar 19 11:41:43.732965 systemd[1]: var-lib-kubelet-pods-20a011e1\x2d04e2\x2d4f67\x2db8c7\x2d4ebf54237e33-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9m69.mount: Deactivated successfully. Mar 19 11:41:43.733101 systemd[1]: var-lib-kubelet-pods-f81a79ea\x2d67e0\x2d45b0\x2d83f8\x2d04f11ab82494-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk5wfb.mount: Deactivated successfully. Mar 19 11:41:43.733228 systemd[1]: var-lib-kubelet-pods-f81a79ea\x2d67e0\x2d45b0\x2d83f8\x2d04f11ab82494-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:41:43.733385 systemd[1]: var-lib-kubelet-pods-f81a79ea\x2d67e0\x2d45b0\x2d83f8\x2d04f11ab82494-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:41:43.754962 containerd[1767]: time="2025-03-19T11:41:43.754879743Z" level=info msg="RemoveContainer for \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\" returns successfully" Mar 19 11:41:43.755501 kubelet[3439]: I0319 11:41:43.755269 3439 scope.go:117] "RemoveContainer" containerID="1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71" Mar 19 11:41:43.757207 containerd[1767]: time="2025-03-19T11:41:43.756861553Z" level=info msg="RemoveContainer for \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\"" Mar 19 11:41:43.780702 containerd[1767]: time="2025-03-19T11:41:43.780577080Z" level=info msg="RemoveContainer for \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\" returns successfully" Mar 19 11:41:43.780843 kubelet[3439]: I0319 11:41:43.780819 3439 scope.go:117] "RemoveContainer" containerID="797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c" Mar 19 11:41:43.781443 containerd[1767]: 
time="2025-03-19T11:41:43.781098283Z" level=error msg="ContainerStatus for \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\": not found" Mar 19 11:41:43.781535 kubelet[3439]: E0319 11:41:43.781266 3439 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\": not found" containerID="797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c" Mar 19 11:41:43.781535 kubelet[3439]: I0319 11:41:43.781298 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c"} err="failed to get container status \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\": rpc error: code = NotFound desc = an error occurred when try to find container \"797309364361e23cd21abb92b2ff2c092df005ac8c3244ef38e2bf4094f7074c\": not found" Mar 19 11:41:43.781535 kubelet[3439]: I0319 11:41:43.781378 3439 scope.go:117] "RemoveContainer" containerID="61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67" Mar 19 11:41:43.781610 containerd[1767]: time="2025-03-19T11:41:43.781545485Z" level=error msg="ContainerStatus for \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\": not found" Mar 19 11:41:43.781709 kubelet[3439]: E0319 11:41:43.781676 3439 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\": not 
found" containerID="61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67" Mar 19 11:41:43.781756 kubelet[3439]: I0319 11:41:43.781711 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67"} err="failed to get container status \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\": rpc error: code = NotFound desc = an error occurred when try to find container \"61d35a237d133547756bcdd45049c67a0ba007ca0b6341a43c4433881b8d5a67\": not found" Mar 19 11:41:43.781756 kubelet[3439]: I0319 11:41:43.781728 3439 scope.go:117] "RemoveContainer" containerID="629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4" Mar 19 11:41:43.782177 kubelet[3439]: E0319 11:41:43.782048 3439 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\": not found" containerID="629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4" Mar 19 11:41:43.782177 kubelet[3439]: I0319 11:41:43.782073 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4"} err="failed to get container status \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\": not found" Mar 19 11:41:43.782177 kubelet[3439]: I0319 11:41:43.782089 3439 scope.go:117] "RemoveContainer" containerID="44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca" Mar 19 11:41:43.782302 containerd[1767]: time="2025-03-19T11:41:43.781915647Z" level=error msg="ContainerStatus for \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"629fe0b36dc2eb22200abdf858f13d9f18764b92978accd1493464d1acda80d4\": not found" Mar 19 11:41:43.782693 containerd[1767]: time="2025-03-19T11:41:43.782443290Z" level=error msg="ContainerStatus for \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\": not found" Mar 19 11:41:43.782757 kubelet[3439]: E0319 11:41:43.782656 3439 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\": not found" containerID="44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca" Mar 19 11:41:43.783026 kubelet[3439]: I0319 11:41:43.782802 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca"} err="failed to get container status \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"44bbcacc8a160231d2904072eee5c3c12f90b58dec5f6fd24d56257c4f0a12ca\": not found" Mar 19 11:41:43.783026 kubelet[3439]: I0319 11:41:43.782826 3439 scope.go:117] "RemoveContainer" containerID="1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71" Mar 19 11:41:43.783304 containerd[1767]: time="2025-03-19T11:41:43.783203694Z" level=error msg="ContainerStatus for \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\": not found" Mar 19 11:41:43.783547 kubelet[3439]: E0319 11:41:43.783369 3439 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\": not found" containerID="1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71" Mar 19 11:41:43.783547 kubelet[3439]: I0319 11:41:43.783402 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71"} err="failed to get container status \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\": rpc error: code = NotFound desc = an error occurred when try to find container \"1aa99f709e0ef238b2054162a341de759c39c7716c95c6e1bd93a1392a1f9f71\": not found" Mar 19 11:41:43.783547 kubelet[3439]: I0319 11:41:43.783419 3439 scope.go:117] "RemoveContainer" containerID="c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48" Mar 19 11:41:43.785156 containerd[1767]: time="2025-03-19T11:41:43.784874903Z" level=info msg="RemoveContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\"" Mar 19 11:41:43.798839 containerd[1767]: time="2025-03-19T11:41:43.798719937Z" level=info msg="RemoveContainer for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" returns successfully" Mar 19 11:41:43.799219 kubelet[3439]: I0319 11:41:43.798989 3439 scope.go:117] "RemoveContainer" containerID="c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48" Mar 19 11:41:43.799342 containerd[1767]: time="2025-03-19T11:41:43.799272820Z" level=error msg="ContainerStatus for \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\": not found" Mar 19 11:41:43.799477 kubelet[3439]: E0319 11:41:43.799426 3439 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\": not found" containerID="c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48" Mar 19 11:41:43.799477 kubelet[3439]: I0319 11:41:43.799459 3439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48"} err="failed to get container status \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\": rpc error: code = NotFound desc = an error occurred when try to find container \"c558448aa2d3d726ff660b293f1f36157a8b91a151dd231b6858eab5a7946a48\": not found" Mar 19 11:41:43.918971 kubelet[3439]: I0319 11:41:43.918063 3439 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a011e1-04e2-4f67-b8c7-4ebf54237e33" path="/var/lib/kubelet/pods/20a011e1-04e2-4f67-b8c7-4ebf54237e33/volumes" Mar 19 11:41:43.918971 kubelet[3439]: I0319 11:41:43.918493 3439 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81a79ea-67e0-45b0-83f8-04f11ab82494" path="/var/lib/kubelet/pods/f81a79ea-67e0-45b0-83f8-04f11ab82494/volumes" Mar 19 11:41:44.096633 kubelet[3439]: E0319 11:41:44.096509 3439 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:41:44.743266 sshd[5196]: Connection closed by 10.200.16.10 port 47618 Mar 19 11:41:44.744042 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Mar 19 11:41:44.748591 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:47618.service: Deactivated successfully. Mar 19 11:41:44.752369 systemd[1]: session-25.scope: Deactivated successfully. Mar 19 11:41:44.753803 systemd-logind[1750]: Session 25 logged out. Waiting for processes to exit. 
Mar 19 11:41:44.755791 systemd-logind[1750]: Removed session 25. Mar 19 11:41:44.832570 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:47626.service - OpenSSH per-connection server daemon (10.200.16.10:47626). Mar 19 11:41:45.324167 sshd[5356]: Accepted publickey for core from 10.200.16.10 port 47626 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:41:45.325602 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:41:45.331373 systemd-logind[1750]: New session 26 of user core. Mar 19 11:41:45.338476 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 19 11:41:46.543280 kubelet[3439]: I0319 11:41:46.543215 3439 memory_manager.go:355] "RemoveStaleState removing state" podUID="20a011e1-04e2-4f67-b8c7-4ebf54237e33" containerName="cilium-operator" Mar 19 11:41:46.543280 kubelet[3439]: I0319 11:41:46.543267 3439 memory_manager.go:355] "RemoveStaleState removing state" podUID="f81a79ea-67e0-45b0-83f8-04f11ab82494" containerName="cilium-agent" Mar 19 11:41:46.551361 systemd[1]: Created slice kubepods-burstable-pod9029e53b_c13d_44f9_9628_491a42a09382.slice - libcontainer container kubepods-burstable-pod9029e53b_c13d_44f9_9628_491a42a09382.slice. Mar 19 11:41:46.561702 sshd[5358]: Connection closed by 10.200.16.10 port 47626 Mar 19 11:41:46.563487 sshd-session[5356]: pam_unix(sshd:session): session closed for user core Mar 19 11:41:46.568367 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:47626.service: Deactivated successfully. Mar 19 11:41:46.573202 systemd[1]: session-26.scope: Deactivated successfully. Mar 19 11:41:46.575672 systemd-logind[1750]: Session 26 logged out. Waiting for processes to exit. Mar 19 11:41:46.577085 systemd-logind[1750]: Removed session 26. Mar 19 11:41:46.651998 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:47636.service - OpenSSH per-connection server daemon (10.200.16.10:47636). 
Mar 19 11:41:46.695376 kubelet[3439]: I0319 11:41:46.695331 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-hostproc\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695376 kubelet[3439]: I0319 11:41:46.695378 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-etc-cni-netd\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695397 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bs7l\" (UniqueName: \"kubernetes.io/projected/9029e53b-c13d-44f9-9628-491a42a09382-kube-api-access-4bs7l\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695414 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-cilium-run\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695428 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-xtables-lock\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695444 3439 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9029e53b-c13d-44f9-9628-491a42a09382-clustermesh-secrets\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695463 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-host-proc-sys-net\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695547 kubelet[3439]: I0319 11:41:46.695483 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9029e53b-c13d-44f9-9628-491a42a09382-hubble-tls\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695501 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-cilium-cgroup\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695519 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-host-proc-sys-kernel\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695534 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-lib-modules\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695549 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-cni-path\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695564 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9029e53b-c13d-44f9-9628-491a42a09382-cilium-config-path\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695681 kubelet[3439]: I0319 11:41:46.695580 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9029e53b-c13d-44f9-9628-491a42a09382-bpf-maps\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.695802 kubelet[3439]: I0319 11:41:46.695609 3439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9029e53b-c13d-44f9-9628-491a42a09382-cilium-ipsec-secrets\") pod \"cilium-xzm7m\" (UID: \"9029e53b-c13d-44f9-9628-491a42a09382\") " pod="kube-system/cilium-xzm7m" Mar 19 11:41:46.855632 containerd[1767]: time="2025-03-19T11:41:46.855458829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xzm7m,Uid:9029e53b-c13d-44f9-9628-491a42a09382,Namespace:kube-system,Attempt:0,}" Mar 19 11:41:46.915597 containerd[1767]: time="2025-03-19T11:41:46.914562546Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:41:46.915597 containerd[1767]: time="2025-03-19T11:41:46.915284590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:41:46.915597 containerd[1767]: time="2025-03-19T11:41:46.915330030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:41:46.916165 containerd[1767]: time="2025-03-19T11:41:46.915557791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:41:46.934504 systemd[1]: Started cri-containerd-03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5.scope - libcontainer container 03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5. Mar 19 11:41:46.959370 containerd[1767]: time="2025-03-19T11:41:46.958902143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xzm7m,Uid:9029e53b-c13d-44f9-9628-491a42a09382,Namespace:kube-system,Attempt:0,} returns sandbox id \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\"" Mar 19 11:41:46.964739 containerd[1767]: time="2025-03-19T11:41:46.963638129Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:41:47.025177 containerd[1767]: time="2025-03-19T11:41:47.025115298Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33\"" Mar 19 11:41:47.026184 containerd[1767]: time="2025-03-19T11:41:47.026143024Z" level=info msg="StartContainer for 
\"310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33\"" Mar 19 11:41:47.054467 systemd[1]: Started cri-containerd-310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33.scope - libcontainer container 310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33. Mar 19 11:41:47.085571 containerd[1767]: time="2025-03-19T11:41:47.085313861Z" level=info msg="StartContainer for \"310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33\" returns successfully" Mar 19 11:41:47.092349 systemd[1]: cri-containerd-310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33.scope: Deactivated successfully. Mar 19 11:41:47.149869 sshd[5368]: Accepted publickey for core from 10.200.16.10 port 47636 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:41:47.151849 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:41:47.156309 systemd-logind[1750]: New session 27 of user core. Mar 19 11:41:47.162452 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 19 11:41:47.180592 kubelet[3439]: I0319 11:41:47.180288 3439 setters.go:602] "Node became not ready" node="ci-4230.1.0-a-2247daed6b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:41:47Z","lastTransitionTime":"2025-03-19T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 19 11:41:47.213054 containerd[1767]: time="2025-03-19T11:41:47.212958184Z" level=info msg="shim disconnected" id=310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33 namespace=k8s.io Mar 19 11:41:47.213054 containerd[1767]: time="2025-03-19T11:41:47.213047905Z" level=warning msg="cleaning up after shim disconnected" id=310b6be0feb7d2ee11769bb9091741505174add4a74cdf1e299413e8110cfb33 namespace=k8s.io Mar 19 11:41:47.213054 containerd[1767]: time="2025-03-19T11:41:47.213057985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:41:47.498609 containerd[1767]: time="2025-03-19T11:41:47.498481033Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 11:41:47.512112 sshd[5461]: Connection closed by 10.200.16.10 port 47636 Mar 19 11:41:47.512340 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Mar 19 11:41:47.520398 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:47636.service: Deactivated successfully. Mar 19 11:41:47.526993 systemd[1]: session-27.scope: Deactivated successfully. Mar 19 11:41:47.531320 systemd-logind[1750]: Session 27 logged out. Waiting for processes to exit. Mar 19 11:41:47.532786 systemd-logind[1750]: Removed session 27. 
Mar 19 11:41:47.571190 containerd[1767]: time="2025-03-19T11:41:47.571132983Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098\"" Mar 19 11:41:47.571932 containerd[1767]: time="2025-03-19T11:41:47.571835066Z" level=info msg="StartContainer for \"640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098\"" Mar 19 11:41:47.615498 systemd[1]: Started cri-containerd-640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098.scope - libcontainer container 640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098. Mar 19 11:41:47.620030 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:47648.service - OpenSSH per-connection server daemon (10.200.16.10:47648). Mar 19 11:41:47.659911 containerd[1767]: time="2025-03-19T11:41:47.659859658Z" level=info msg="StartContainer for \"640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098\" returns successfully" Mar 19 11:41:47.663539 systemd[1]: cri-containerd-640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098.scope: Deactivated successfully. 
Mar 19 11:41:47.709164 containerd[1767]: time="2025-03-19T11:41:47.708876080Z" level=info msg="shim disconnected" id=640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098 namespace=k8s.io Mar 19 11:41:47.709164 containerd[1767]: time="2025-03-19T11:41:47.708977761Z" level=warning msg="cleaning up after shim disconnected" id=640f40e3453d991bbc93726fedb8404f4fdfc3e5251c1aab43a0095df5f33098 namespace=k8s.io Mar 19 11:41:47.709164 containerd[1767]: time="2025-03-19T11:41:47.708988281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:41:48.126110 sshd[5497]: Accepted publickey for core from 10.200.16.10 port 47648 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:41:48.127675 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:41:48.132373 systemd-logind[1750]: New session 28 of user core. Mar 19 11:41:48.143478 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 19 11:41:48.502757 containerd[1767]: time="2025-03-19T11:41:48.502426291Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 11:41:48.586760 containerd[1767]: time="2025-03-19T11:41:48.586703062Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99\"" Mar 19 11:41:48.587743 containerd[1767]: time="2025-03-19T11:41:48.587696267Z" level=info msg="StartContainer for \"90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99\"" Mar 19 11:41:48.624451 systemd[1]: Started cri-containerd-90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99.scope - libcontainer container 90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99. 
Mar 19 11:41:48.662187 systemd[1]: cri-containerd-90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99.scope: Deactivated successfully.
Mar 19 11:41:48.668854 containerd[1767]: time="2025-03-19T11:41:48.668745781Z" level=info msg="StartContainer for \"90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99\" returns successfully"
Mar 19 11:41:48.724277 containerd[1767]: time="2025-03-19T11:41:48.723913277Z" level=info msg="shim disconnected" id=90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99 namespace=k8s.io
Mar 19 11:41:48.724277 containerd[1767]: time="2025-03-19T11:41:48.724135798Z" level=warning msg="cleaning up after shim disconnected" id=90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99 namespace=k8s.io
Mar 19 11:41:48.724277 containerd[1767]: time="2025-03-19T11:41:48.724151318Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:48.802066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90993ed629a04381107fffe1728c12e657b057c5bdc2e3b9a71766ca98409f99-rootfs.mount: Deactivated successfully.
Mar 19 11:41:49.098233 kubelet[3439]: E0319 11:41:49.098112 3439 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:41:49.506431 containerd[1767]: time="2025-03-19T11:41:49.506385366Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:41:49.572701 containerd[1767]: time="2025-03-19T11:41:49.572595203Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd\""
Mar 19 11:41:49.574533 containerd[1767]: time="2025-03-19T11:41:49.573233646Z" level=info msg="StartContainer for \"369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd\""
Mar 19 11:41:49.602477 systemd[1]: Started cri-containerd-369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd.scope - libcontainer container 369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd.
Mar 19 11:41:49.627310 systemd[1]: cri-containerd-369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd.scope: Deactivated successfully.
Mar 19 11:41:49.633793 containerd[1767]: time="2025-03-19T11:41:49.633738972Z" level=info msg="StartContainer for \"369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd\" returns successfully"
Mar 19 11:41:49.695941 containerd[1767]: time="2025-03-19T11:41:49.695869787Z" level=info msg="shim disconnected" id=369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd namespace=k8s.io
Mar 19 11:41:49.695941 containerd[1767]: time="2025-03-19T11:41:49.695928227Z" level=warning msg="cleaning up after shim disconnected" id=369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd namespace=k8s.io
Mar 19 11:41:49.695941 containerd[1767]: time="2025-03-19T11:41:49.695938747Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:41:49.802163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369c3b705e76c6553e68918e67931bcfec03d0d0965b032f2d13fe1b49fa58bd-rootfs.mount: Deactivated successfully.
Mar 19 11:41:50.515720 containerd[1767]: time="2025-03-19T11:41:50.515583484Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:41:50.603611 containerd[1767]: time="2025-03-19T11:41:50.603549798Z" level=info msg="CreateContainer within sandbox \"03f93cd240ee4549317af08214bac9c22d5aac045b0a56c7d0c4f70d4b1d87a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3\""
Mar 19 11:41:50.604565 containerd[1767]: time="2025-03-19T11:41:50.604284682Z" level=info msg="StartContainer for \"606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3\""
Mar 19 11:41:50.639534 systemd[1]: Started cri-containerd-606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3.scope - libcontainer container 606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3.
Mar 19 11:41:50.674631 containerd[1767]: time="2025-03-19T11:41:50.671913606Z" level=info msg="StartContainer for \"606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3\" returns successfully"
Mar 19 11:41:51.130270 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:41:52.591079 systemd[1]: run-containerd-runc-k8s.io-606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3-runc.3jP6gy.mount: Deactivated successfully.
Mar 19 11:41:53.991598 systemd-networkd[1479]: lxc_health: Link UP
Mar 19 11:41:54.010668 systemd-networkd[1479]: lxc_health: Gained carrier
Mar 19 11:41:54.882939 kubelet[3439]: I0319 11:41:54.882868 3439 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xzm7m" podStartSLOduration=8.882852175 podStartE2EDuration="8.882852175s" podCreationTimestamp="2025-03-19 11:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:41:51.53748955 +0000 UTC m=+217.748308933" watchObservedRunningTime="2025-03-19 11:41:54.882852175 +0000 UTC m=+221.093671558"
Mar 19 11:41:56.065387 systemd-networkd[1479]: lxc_health: Gained IPv6LL
Mar 19 11:41:56.994159 kubelet[3439]: E0319 11:41:56.994112 3439 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42330->127.0.0.1:44985: read tcp 127.0.0.1:42330->127.0.0.1:44985: read: connection reset by peer
Mar 19 11:41:59.095164 systemd[1]: run-containerd-runc-k8s.io-606ef8fa8698727a47bba55fa3ae89421e1ba22790489f39a7ed8e43c8b3b1f3-runc.YB6CTU.mount: Deactivated successfully.
Mar 19 11:41:59.238276 sshd[5541]: Connection closed by 10.200.16.10 port 47648
Mar 19 11:41:59.238906 sshd-session[5497]: pam_unix(sshd:session): session closed for user core
Mar 19 11:41:59.242740 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:47648.service: Deactivated successfully.
Mar 19 11:41:59.244712 systemd[1]: session-28.scope: Deactivated successfully.
Mar 19 11:41:59.245523 systemd-logind[1750]: Session 28 logged out. Waiting for processes to exit.
Mar 19 11:41:59.246682 systemd-logind[1750]: Removed session 28.