Mar 25 01:16:20.289557 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 25 01:16:20.289579 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Mar 24 23:39:14 -00 2025
Mar 25 01:16:20.289587 kernel: KASLR enabled
Mar 25 01:16:20.289593 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 25 01:16:20.289600 kernel: printk: bootconsole [pl11] enabled
Mar 25 01:16:20.289605 kernel: efi: EFI v2.7 by EDK II
Mar 25 01:16:20.289612 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f210698 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Mar 25 01:16:20.289618 kernel: random: crng init done
Mar 25 01:16:20.289624 kernel: secureboot: Secure boot disabled
Mar 25 01:16:20.289629 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:16:20.289635 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 25 01:16:20.289640 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289646 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289653 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 25 01:16:20.289661 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289667 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289673 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289680 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289686 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289692 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289698 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 25 01:16:20.289704 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:20.289710 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 25 01:16:20.289716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 25 01:16:20.289722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 25 01:16:20.289728 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 25 01:16:20.289734 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 25 01:16:20.289740 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 25 01:16:20.289747 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 25 01:16:20.289753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 25 01:16:20.289759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 25 01:16:20.289765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 25 01:16:20.289771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 25 01:16:20.289777 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 25 01:16:20.289782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 25 01:16:20.289788 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Mar 25 01:16:20.289794 kernel: Zone ranges:
Mar 25 01:16:20.289800 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 25 01:16:20.289806 kernel: DMA32 empty
Mar 25 01:16:20.289812 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 25 01:16:20.289822 kernel: Movable zone start for each node
Mar 25 01:16:20.289828 kernel: Early memory node ranges
Mar 25 01:16:20.289834 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 25 01:16:20.289841 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 25 01:16:20.289847 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 25 01:16:20.289855 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 25 01:16:20.289861 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 25 01:16:20.289868 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 25 01:16:20.289874 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 25 01:16:20.289880 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 25 01:16:20.289886 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 25 01:16:20.289893 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 25 01:16:20.289899 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 25 01:16:20.289905 kernel: psci: probing for conduit method from ACPI.
Mar 25 01:16:20.289912 kernel: psci: PSCIv1.1 detected in firmware.
Mar 25 01:16:20.289918 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 25 01:16:20.289924 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 25 01:16:20.289932 kernel: psci: SMC Calling Convention v1.4
Mar 25 01:16:20.289938 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 25 01:16:20.289944 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 25 01:16:20.289951 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 25 01:16:20.289957 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 25 01:16:20.289963 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 25 01:16:20.289970 kernel: Detected PIPT I-cache on CPU0
Mar 25 01:16:20.289976 kernel: CPU features: detected: GIC system register CPU interface
Mar 25 01:16:20.289982 kernel: CPU features: detected: Hardware dirty bit management
Mar 25 01:16:20.289989 kernel: CPU features: detected: Spectre-BHB
Mar 25 01:16:20.289995 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 25 01:16:20.290003 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 25 01:16:20.290009 kernel: CPU features: detected: ARM erratum 1418040
Mar 25 01:16:20.290015 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 25 01:16:20.290022 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 25 01:16:20.290028 kernel: alternatives: applying boot alternatives
Mar 25 01:16:20.290036 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:16:20.290042 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:16:20.290049 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 25 01:16:20.290055 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:16:20.290062 kernel: Fallback order for Node 0: 0
Mar 25 01:16:20.290068 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 25 01:16:20.290075 kernel: Policy zone: Normal
Mar 25 01:16:20.292136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:16:20.292146 kernel: software IO TLB: area num 2.
Mar 25 01:16:20.292152 kernel: software IO TLB: mapped [mem 0x0000000031590000-0x0000000035590000] (64MB)
Mar 25 01:16:20.292160 kernel: Memory: 3983524K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 210636K reserved, 0K cma-reserved)
Mar 25 01:16:20.292167 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 25 01:16:20.292173 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:16:20.292180 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:16:20.292187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 25 01:16:20.292194 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:16:20.292201 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:16:20.292214 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:16:20.292220 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 25 01:16:20.292227 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 25 01:16:20.292233 kernel: GICv3: 960 SPIs implemented
Mar 25 01:16:20.292240 kernel: GICv3: 0 Extended SPIs implemented
Mar 25 01:16:20.292246 kernel: Root IRQ handler: gic_handle_irq
Mar 25 01:16:20.292252 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 25 01:16:20.292259 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 25 01:16:20.292265 kernel: ITS: No ITS available, not enabling LPIs
Mar 25 01:16:20.292272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:16:20.292279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:16:20.292285 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 25 01:16:20.292294 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 25 01:16:20.292300 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 25 01:16:20.292307 kernel: Console: colour dummy device 80x25
Mar 25 01:16:20.292314 kernel: printk: console [tty1] enabled
Mar 25 01:16:20.292321 kernel: ACPI: Core revision 20230628
Mar 25 01:16:20.292328 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 25 01:16:20.292334 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:16:20.292341 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:16:20.292348 kernel: landlock: Up and running.
Mar 25 01:16:20.292356 kernel: SELinux: Initializing.
Mar 25 01:16:20.292363 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:16:20.292369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:16:20.292376 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:16:20.292383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:16:20.292390 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 25 01:16:20.292397 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 25 01:16:20.292410 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 25 01:16:20.292417 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:16:20.292424 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:16:20.292432 kernel: Remapping and enabling EFI services.
Mar 25 01:16:20.292439 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:16:20.292447 kernel: Detected PIPT I-cache on CPU1
Mar 25 01:16:20.292454 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 25 01:16:20.292461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:16:20.292468 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 25 01:16:20.292475 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 01:16:20.292483 kernel: SMP: Total of 2 processors activated.
Mar 25 01:16:20.292491 kernel: CPU features: detected: 32-bit EL0 Support
Mar 25 01:16:20.292498 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 25 01:16:20.292505 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 25 01:16:20.292512 kernel: CPU features: detected: CRC32 instructions
Mar 25 01:16:20.292519 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 25 01:16:20.292526 kernel: CPU features: detected: LSE atomic instructions
Mar 25 01:16:20.292533 kernel: CPU features: detected: Privileged Access Never
Mar 25 01:16:20.292540 kernel: CPU: All CPU(s) started at EL1
Mar 25 01:16:20.292548 kernel: alternatives: applying system-wide alternatives
Mar 25 01:16:20.292555 kernel: devtmpfs: initialized
Mar 25 01:16:20.292562 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:16:20.292570 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 25 01:16:20.292576 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:16:20.292583 kernel: SMBIOS 3.1.0 present.
Mar 25 01:16:20.292590 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 25 01:16:20.292598 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:16:20.292605 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 25 01:16:20.292613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 25 01:16:20.292621 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 25 01:16:20.292628 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:16:20.292635 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 25 01:16:20.292642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:16:20.292649 kernel: cpuidle: using governor menu
Mar 25 01:16:20.292656 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 25 01:16:20.292663 kernel: ASID allocator initialised with 32768 entries
Mar 25 01:16:20.292670 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:16:20.292678 kernel: Serial: AMBA PL011 UART driver
Mar 25 01:16:20.292685 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 25 01:16:20.292692 kernel: Modules: 0 pages in range for non-PLT usage
Mar 25 01:16:20.292699 kernel: Modules: 509248 pages in range for PLT usage
Mar 25 01:16:20.292706 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:16:20.292713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:16:20.292721 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 25 01:16:20.292728 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 25 01:16:20.292734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:16:20.292743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:16:20.292750 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 25 01:16:20.292757 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 25 01:16:20.292764 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:16:20.292771 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:16:20.292778 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:16:20.292785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:16:20.292792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 01:16:20.292799 kernel: ACPI: Interpreter enabled
Mar 25 01:16:20.292807 kernel: ACPI: Using GIC for interrupt routing
Mar 25 01:16:20.292814 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 25 01:16:20.292821 kernel: printk: console [ttyAMA0] enabled
Mar 25 01:16:20.292828 kernel: printk: bootconsole [pl11] disabled
Mar 25 01:16:20.292835 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 25 01:16:20.292842 kernel: iommu: Default domain type: Translated
Mar 25 01:16:20.292849 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 25 01:16:20.292856 kernel: efivars: Registered efivars operations
Mar 25 01:16:20.292863 kernel: vgaarb: loaded
Mar 25 01:16:20.292872 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 25 01:16:20.292879 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:16:20.292886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:16:20.292893 kernel: pnp: PnP ACPI init
Mar 25 01:16:20.292899 kernel: pnp: PnP ACPI: found 0 devices
Mar 25 01:16:20.292906 kernel: NET: Registered PF_INET protocol family
Mar 25 01:16:20.292913 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 01:16:20.292920 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 01:16:20.292928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:16:20.292936 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:16:20.292943 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 01:16:20.292951 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 01:16:20.292958 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:16:20.292965 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:16:20.292972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:16:20.292979 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:16:20.292986 kernel: kvm [1]: HYP mode not available
Mar 25 01:16:20.292993 kernel: Initialise system trusted keyrings
Mar 25 01:16:20.293001 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 01:16:20.293008 kernel: Key type asymmetric registered
Mar 25 01:16:20.293015 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:16:20.293022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 25 01:16:20.293029 kernel: io scheduler mq-deadline registered
Mar 25 01:16:20.293035 kernel: io scheduler kyber registered
Mar 25 01:16:20.293042 kernel: io scheduler bfq registered
Mar 25 01:16:20.293050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:16:20.293056 kernel: thunder_xcv, ver 1.0
Mar 25 01:16:20.293065 kernel: thunder_bgx, ver 1.0
Mar 25 01:16:20.293072 kernel: nicpf, ver 1.0
Mar 25 01:16:20.293091 kernel: nicvf, ver 1.0
Mar 25 01:16:20.293244 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 25 01:16:20.293318 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-25T01:16:19 UTC (1742865379)
Mar 25 01:16:20.293328 kernel: efifb: probing for efifb
Mar 25 01:16:20.293336 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 25 01:16:20.293343 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 25 01:16:20.293354 kernel: efifb: scrolling: redraw
Mar 25 01:16:20.293360 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 25 01:16:20.293367 kernel: Console: switching to colour frame buffer device 128x48
Mar 25 01:16:20.293374 kernel: fb0: EFI VGA frame buffer device
Mar 25 01:16:20.293381 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 25 01:16:20.293388 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 25 01:16:20.293395 kernel: No ACPI PMU IRQ for CPU0
Mar 25 01:16:20.293402 kernel: No ACPI PMU IRQ for CPU1
Mar 25 01:16:20.293409 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 25 01:16:20.293418 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 25 01:16:20.293425 kernel: watchdog: Hard watchdog permanently disabled
Mar 25 01:16:20.293432 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:16:20.293438 kernel: Segment Routing with IPv6
Mar 25 01:16:20.293446 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:16:20.293452 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:16:20.293459 kernel: Key type dns_resolver registered
Mar 25 01:16:20.293466 kernel: registered taskstats version 1
Mar 25 01:16:20.293473 kernel: Loading compiled-in X.509 certificates
Mar 25 01:16:20.293481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ed4ababe871f0afac8b4236504477de11a6baf07'
Mar 25 01:16:20.293488 kernel: Key type .fscrypt registered
Mar 25 01:16:20.293495 kernel: Key type fscrypt-provisioning registered
Mar 25 01:16:20.293502 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:16:20.293509 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:16:20.293516 kernel: ima: No architecture policies found
Mar 25 01:16:20.293523 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 25 01:16:20.293530 kernel: clk: Disabling unused clocks
Mar 25 01:16:20.293537 kernel: Freeing unused kernel memory: 38464K
Mar 25 01:16:20.293545 kernel: Run /init as init process
Mar 25 01:16:20.293552 kernel: with arguments:
Mar 25 01:16:20.293559 kernel: /init
Mar 25 01:16:20.293565 kernel: with environment:
Mar 25 01:16:20.293572 kernel: HOME=/
Mar 25 01:16:20.293579 kernel: TERM=linux
Mar 25 01:16:20.293586 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:16:20.293594 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:16:20.293605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:16:20.293614 systemd[1]: Detected virtualization microsoft.
Mar 25 01:16:20.293621 systemd[1]: Detected architecture arm64.
Mar 25 01:16:20.293628 systemd[1]: Running in initrd.
Mar 25 01:16:20.293636 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:16:20.293643 systemd[1]: Hostname set to .
Mar 25 01:16:20.293651 systemd[1]: Initializing machine ID from random generator.
Mar 25 01:16:20.293658 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:16:20.293667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:16:20.293675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:16:20.293683 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:16:20.293690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:16:20.293698 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:16:20.293706 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:16:20.293715 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:16:20.293724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:16:20.293732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:16:20.293739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:16:20.293747 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:16:20.293754 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:16:20.293762 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:16:20.293769 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:16:20.293777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:16:20.293786 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:16:20.293794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:16:20.293802 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:16:20.293809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:16:20.293817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:16:20.293825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:16:20.293832 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:16:20.293840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:16:20.293848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:16:20.293857 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:16:20.293864 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:16:20.293872 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:16:20.293879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:16:20.293904 systemd-journald[217]: Collecting audit messages is disabled.
Mar 25 01:16:20.293925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:20.293934 systemd-journald[217]: Journal started
Mar 25 01:16:20.293952 systemd-journald[217]: Runtime Journal (/run/log/journal/ca96ac75874b42778647a0d3ffb3e7b8) is 8M, max 78.5M, 70.5M free.
Mar 25 01:16:20.302009 systemd-modules-load[219]: Inserted module 'overlay'
Mar 25 01:16:20.318070 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:16:20.318752 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:16:20.332337 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:16:20.362098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:16:20.362119 kernel: Bridge firewalling registered
Mar 25 01:16:20.352548 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 25 01:16:20.357388 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:16:20.368103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:16:20.381117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:20.397212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:20.406309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:16:20.430756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:16:20.444207 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:16:20.471165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:16:20.478991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:16:20.487858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:16:20.502141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:20.516341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:16:20.534959 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:16:20.563609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:16:20.571446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:16:20.602791 dracut-cmdline[254]: dracut-dracut-053
Mar 25 01:16:20.608452 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:16:20.642537 systemd-resolved[255]: Positive Trust Anchors:
Mar 25 01:16:20.642555 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:16:20.642590 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:16:20.644919 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 25 01:16:20.645785 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:16:20.661150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:16:20.759099 kernel: SCSI subsystem initialized
Mar 25 01:16:20.766105 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:16:20.776114 kernel: iscsi: registered transport (tcp)
Mar 25 01:16:20.794058 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:16:20.794122 kernel: QLogic iSCSI HBA Driver
Mar 25 01:16:20.831063 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:16:20.839207 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:16:20.885483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:16:20.885539 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:16:20.891516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:16:20.940105 kernel: raid6: neonx8 gen() 15763 MB/s
Mar 25 01:16:20.960098 kernel: raid6: neonx4 gen() 15826 MB/s
Mar 25 01:16:20.980087 kernel: raid6: neonx2 gen() 13217 MB/s
Mar 25 01:16:21.001089 kernel: raid6: neonx1 gen() 10511 MB/s
Mar 25 01:16:21.021088 kernel: raid6: int64x8 gen() 6796 MB/s
Mar 25 01:16:21.041091 kernel: raid6: int64x4 gen() 7349 MB/s
Mar 25 01:16:21.062095 kernel: raid6: int64x2 gen() 6115 MB/s
Mar 25 01:16:21.085640 kernel: raid6: int64x1 gen() 5061 MB/s
Mar 25 01:16:21.085655 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s
Mar 25 01:16:21.109530 kernel: raid6: .... xor() 12359 MB/s, rmw enabled
Mar 25 01:16:21.109543 kernel: raid6: using neon recovery algorithm
Mar 25 01:16:21.118091 kernel: xor: measuring software checksum speed
Mar 25 01:16:21.124496 kernel: 8regs : 20464 MB/sec
Mar 25 01:16:21.124509 kernel: 32regs : 21670 MB/sec
Mar 25 01:16:21.127833 kernel: arm64_neon : 27898 MB/sec
Mar 25 01:16:21.131677 kernel: xor: using function: arm64_neon (27898 MB/sec)
Mar 25 01:16:21.181100 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:16:21.190014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:16:21.201699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:16:21.240348 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 25 01:16:21.245890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:16:21.261893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:16:21.290260 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 25 01:16:21.319060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:16:21.326653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:16:21.383178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:16:21.396217 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:16:21.433123 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:16:21.450980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:16:21.459158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:16:21.481364 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:16:21.491207 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:16:21.523978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:16:21.536195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:16:21.536294 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:21.550276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:21.563854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:16:21.598701 kernel: hv_vmbus: Vmbus version:5.3
Mar 25 01:16:21.601138 kernel: hv_vmbus: registering driver hid_hyperv
Mar 25 01:16:21.601158 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 25 01:16:21.601168 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 25 01:16:21.564055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:21.858393 kernel: PTP clock support registered
Mar 25 01:16:21.858417 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 25 01:16:21.858427 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 25 01:16:21.858438 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 25 01:16:21.858485 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 25 01:16:21.858626 kernel: hv_vmbus: registering driver hv_netvsc
Mar 25 01:16:21.858637 kernel: hv_vmbus: registering driver hv_storvsc
Mar 25 01:16:21.858645 kernel: hv_utils: Registering HyperV Utility Driver
Mar 25 01:16:21.858654 kernel: hv_vmbus: registering driver hv_utils
Mar 25 01:16:21.858667 kernel: scsi host1: storvsc_host_t
Mar 25 01:16:21.858763 kernel: hv_utils: Heartbeat IC version 3.0
Mar 25 01:16:21.858905 kernel: scsi host0: storvsc_host_t
Mar 25 01:16:21.859033 kernel: hv_utils: Shutdown IC version 3.2
Mar 25 01:16:21.859044 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 25 01:16:21.859139 kernel: hv_utils: TimeSync IC version 4.0
Mar 25 01:16:21.859152 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 25 01:16:21.859245 kernel: hv_netvsc 000d3a07-a374-000d-3a07-a374000d3a07 eth0: VF slot 1 added
Mar 25 01:16:21.578061 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:21.593538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:21.906234 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 25 01:16:21.914584 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 25 01:16:21.914598 kernel: hv_vmbus: registering driver hv_pci
Mar 25 01:16:21.914607 kernel: hv_pci 09777d47-e0a7-475e-afb7-79978f087956: PCI VMBus probing: Using version 0x10004
Mar 25 01:16:22.024286 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 25 01:16:22.024858 kernel: hv_pci 09777d47-e0a7-475e-afb7-79978f087956: PCI host bridge to bus e0a7:00
Mar 25 01:16:22.024981 kernel: pci_bus e0a7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 25 01:16:22.026565 kernel: pci_bus e0a7:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 25 01:16:22.026680 kernel: pci e0a7:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 25 01:16:22.026794 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 25 01:16:22.036759 kernel: pci e0a7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 25 01:16:22.036878 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 25 01:16:22.036963 kernel: pci e0a7:00:02.0: enabling Extended Tags
Mar 25 01:16:22.037050 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 25 01:16:22.037144 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 25 01:16:22.037225 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 25 01:16:22.037305 kernel: pci e0a7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e0a7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 25 01:16:22.037385 kernel: pci_bus e0a7:00: busn_res: [bus 00-ff] end is updated to 00
Mar 25 01:16:22.037993 kernel: pci e0a7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 25 01:16:22.038084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:22.038094 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 25 01:16:21.778439 systemd-resolved[255]: Clock change detected. Flushing caches.
Mar 25 01:16:21.847918 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:16:21.860277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:16:21.860354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:21.875577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:21.928258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:21.941582 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:22.037016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:22.107457 kernel: mlx5_core e0a7:00:02.0: enabling device (0000 -> 0002)
Mar 25 01:16:22.325372 kernel: mlx5_core e0a7:00:02.0: firmware version: 16.30.1284
Mar 25 01:16:22.325867 kernel: hv_netvsc 000d3a07-a374-000d-3a07-a374000d3a07 eth0: VF registering: eth1
Mar 25 01:16:22.325964 kernel: mlx5_core e0a7:00:02.0 eth1: joined to eth0
Mar 25 01:16:22.326064 kernel: mlx5_core e0a7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 25 01:16:22.333462 kernel: mlx5_core e0a7:00:02.0 enP57511s1: renamed from eth1
Mar 25 01:16:22.680032 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 25 01:16:22.729167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 25 01:16:22.748961 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (498)
Mar 25 01:16:22.753376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 25 01:16:22.775468 kernel: BTRFS: device fsid bf348154-9cb1-474d-801c-0e035a5758cf devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (489)
Mar 25 01:16:22.790360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 25 01:16:22.797143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 25 01:16:22.812609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:16:22.851782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:22.861357 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:23.869564 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:23.871313 disk-uuid[602]: The operation has completed successfully.
Mar 25 01:16:23.932317 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:16:23.932428 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:16:23.967152 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:16:23.990014 sh[688]: Success
Mar 25 01:16:24.020485 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 25 01:16:24.238992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:16:24.250194 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:16:24.267229 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:16:24.298033 kernel: BTRFS info (device dm-0): first mount of filesystem bf348154-9cb1-474d-801c-0e035a5758cf
Mar 25 01:16:24.298078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:24.304672 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:16:24.309910 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:16:24.315375 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:16:24.585359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:16:24.590238 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:16:24.592569 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:16:24.601607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:16:24.656673 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:24.656718 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:24.656729 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:24.693478 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:24.705500 kernel: BTRFS info (device sda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:24.713059 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:16:24.720614 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:16:24.743784 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:16:24.760066 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:16:24.805874 systemd-networkd[869]: lo: Link UP
Mar 25 01:16:24.806483 systemd-networkd[869]: lo: Gained carrier
Mar 25 01:16:24.808103 systemd-networkd[869]: Enumeration completed
Mar 25 01:16:24.808315 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:16:24.810660 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:16:24.810664 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:16:24.818866 systemd[1]: Reached target network.target - Network.
Mar 25 01:16:24.902463 kernel: mlx5_core e0a7:00:02.0 enP57511s1: Link up
Mar 25 01:16:24.946461 kernel: hv_netvsc 000d3a07-a374-000d-3a07-a374000d3a07 eth0: Data path switched to VF: enP57511s1
Mar 25 01:16:24.947126 systemd-networkd[869]: enP57511s1: Link UP
Mar 25 01:16:24.947352 systemd-networkd[869]: eth0: Link UP
Mar 25 01:16:24.947763 systemd-networkd[869]: eth0: Gained carrier
Mar 25 01:16:24.947772 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:16:24.971679 systemd-networkd[869]: enP57511s1: Gained carrier
Mar 25 01:16:24.986485 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 25 01:16:25.503707 ignition[852]: Ignition 2.20.0
Mar 25 01:16:25.503720 ignition[852]: Stage: fetch-offline
Mar 25 01:16:25.508774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:16:25.503751 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:25.521581 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:16:25.503758 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:25.503855 ignition[852]: parsed url from cmdline: ""
Mar 25 01:16:25.503858 ignition[852]: no config URL provided
Mar 25 01:16:25.503863 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:16:25.503870 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:16:25.503874 ignition[852]: failed to fetch config: resource requires networking
Mar 25 01:16:25.504042 ignition[852]: Ignition finished successfully
Mar 25 01:16:25.561916 ignition[880]: Ignition 2.20.0
Mar 25 01:16:25.561926 ignition[880]: Stage: fetch
Mar 25 01:16:25.562108 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:25.562119 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:25.562222 ignition[880]: parsed url from cmdline: ""
Mar 25 01:16:25.562225 ignition[880]: no config URL provided
Mar 25 01:16:25.562229 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:16:25.562239 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:16:25.562267 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 25 01:16:25.653542 ignition[880]: GET result: OK
Mar 25 01:16:25.653612 ignition[880]: config has been read from IMDS userdata
Mar 25 01:16:25.653651 ignition[880]: parsing config with SHA512: 8d392307f919f9bfc05a0619a3eb6edf482844b34ce4c0cd0f7f09437a744ae1c0036a77cfb4f7c173bc13e9a34d66a801bae9ade10c871d6828e06798f04c94
Mar 25 01:16:25.657852 unknown[880]: fetched base config from "system"
Mar 25 01:16:25.658212 ignition[880]: fetch: fetch complete
Mar 25 01:16:25.657858 unknown[880]: fetched base config from "system"
Mar 25 01:16:25.658217 ignition[880]: fetch: fetch passed
Mar 25 01:16:25.657863 unknown[880]: fetched user config from "azure"
Mar 25 01:16:25.658256 ignition[880]: Ignition finished successfully
Mar 25 01:16:25.663086 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:16:25.675578 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:16:25.718269 ignition[887]: Ignition 2.20.0
Mar 25 01:16:25.718277 ignition[887]: Stage: kargs
Mar 25 01:16:25.718434 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:25.727158 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:16:25.721503 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:25.739584 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:16:25.722399 ignition[887]: kargs: kargs passed
Mar 25 01:16:25.722472 ignition[887]: Ignition finished successfully
Mar 25 01:16:25.772203 ignition[894]: Ignition 2.20.0
Mar 25 01:16:25.772214 ignition[894]: Stage: disks
Mar 25 01:16:25.772366 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:25.777810 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:16:25.772375 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:25.784359 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:16:25.773215 ignition[894]: disks: disks passed
Mar 25 01:16:25.795240 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:16:25.773254 ignition[894]: Ignition finished successfully
Mar 25 01:16:25.807661 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:16:25.818915 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:16:25.830291 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:16:25.841582 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:16:25.959683 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 25 01:16:25.963781 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:16:25.977214 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:16:26.044460 kernel: EXT4-fs (sda9): mounted filesystem a7a89271-ee7d-4bda-a834-705261d6cda9 r/w with ordered data mode. Quota mode: none.
Mar 25 01:16:26.044985 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:16:26.049828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:16:26.090920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:16:26.112147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:16:26.122689 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 25 01:16:26.134540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:16:26.134583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:16:26.143961 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:16:26.179222 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (914)
Mar 25 01:16:26.174922 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:16:26.207369 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:26.207389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:26.207399 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:26.217463 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:26.219707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:16:26.378604 systemd-networkd[869]: enP57511s1: Gained IPv6LL
Mar 25 01:16:26.698563 systemd-networkd[869]: eth0: Gained IPv6LL
Mar 25 01:16:26.703751 coreos-metadata[916]: Mar 25 01:16:26.702 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 25 01:16:26.711991 coreos-metadata[916]: Mar 25 01:16:26.711 INFO Fetch successful
Mar 25 01:16:26.711991 coreos-metadata[916]: Mar 25 01:16:26.711 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 25 01:16:26.728954 coreos-metadata[916]: Mar 25 01:16:26.728 INFO Fetch successful
Mar 25 01:16:26.742516 coreos-metadata[916]: Mar 25 01:16:26.742 INFO wrote hostname ci-4284.0.0-a-be6d65597e to /sysroot/etc/hostname
Mar 25 01:16:26.752369 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 25 01:16:27.005869 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:16:27.028364 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:16:27.066084 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:16:27.075106 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:16:27.992936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:16:28.002556 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:16:28.013602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:16:28.036134 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:16:28.049464 kernel: BTRFS info (device sda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:28.069636 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:16:28.087947 ignition[1035]: INFO : Ignition 2.20.0
Mar 25 01:16:28.092753 ignition[1035]: INFO : Stage: mount
Mar 25 01:16:28.092753 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:28.092753 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:28.092753 ignition[1035]: INFO : mount: mount passed
Mar 25 01:16:28.092753 ignition[1035]: INFO : Ignition finished successfully
Mar 25 01:16:28.093193 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:16:28.106568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:16:28.140710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:16:28.166952 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (1046)
Mar 25 01:16:28.173582 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:28.179214 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:28.183330 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:28.190479 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:28.191521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:16:28.217541 ignition[1064]: INFO : Ignition 2.20.0
Mar 25 01:16:28.221482 ignition[1064]: INFO : Stage: files
Mar 25 01:16:28.221482 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:28.221482 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:28.221482 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:16:28.271307 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:16:28.271307 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:16:28.332569 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:16:28.340028 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:16:28.340028 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:16:28.332990 unknown[1064]: wrote ssh authorized keys file for user: core
Mar 25 01:16:28.358933 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 25 01:16:28.358933 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 25 01:16:28.430100 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:16:28.612507 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 25 01:16:28.612507 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:16:28.632860 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 25 01:16:29.080950 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 25 01:16:29.164406 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:16:29.164406 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 25 01:16:29.183733 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 25 01:16:29.573864 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 25 01:16:29.839994 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 25 01:16:29.839994 ignition[1064]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 25 01:16:29.920723 ignition[1064]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:16:29.931874 ignition[1064]: INFO : files: files passed
Mar 25 01:16:29.931874 ignition[1064]: INFO : Ignition finished successfully
Mar 25 01:16:29.932079 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:16:29.951601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:16:29.963588 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:16:30.052549 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:16:30.052549 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:16:30.004042 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:16:30.088466 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:16:30.004131 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 25 01:16:30.020308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:16:30.032029 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 01:16:30.046647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 01:16:30.118274 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 01:16:30.118410 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 01:16:30.130372 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 01:16:30.140849 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 01:16:30.153749 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 01:16:30.154720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 01:16:30.199579 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:16:30.209627 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 01:16:30.235894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:16:30.242280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:16:30.254818 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 01:16:30.266032 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 01:16:30.266161 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:16:30.282700 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 25 01:16:30.288352 systemd[1]: Stopped target basic.target - Basic System. Mar 25 01:16:30.299825 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 01:16:30.311147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:16:30.321764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 01:16:30.333142 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 01:16:30.344503 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:16:30.356852 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 01:16:30.367291 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 01:16:30.379019 systemd[1]: Stopped target swap.target - Swaps. Mar 25 01:16:30.389193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 01:16:30.389329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:16:30.404715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:16:30.411122 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:16:30.422237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 25 01:16:30.422309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:16:30.434069 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Mar 25 01:16:30.434194 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 01:16:30.450857 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 01:16:30.450981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:16:30.457666 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 01:16:30.457754 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 01:16:30.537698 ignition[1117]: INFO : Ignition 2.20.0 Mar 25 01:16:30.537698 ignition[1117]: INFO : Stage: umount Mar 25 01:16:30.537698 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:16:30.537698 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 25 01:16:30.537698 ignition[1117]: INFO : umount: umount passed Mar 25 01:16:30.537698 ignition[1117]: INFO : Ignition finished successfully Mar 25 01:16:30.467682 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 25 01:16:30.467771 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 25 01:16:30.482709 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 01:16:30.499278 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 01:16:30.499429 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:16:30.529155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 01:16:30.541993 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 25 01:16:30.542155 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:16:30.552137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 01:16:30.552230 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:16:30.568165 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 01:16:30.569998 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 01:16:30.587261 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 01:16:30.587377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 01:16:30.597368 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 01:16:30.597430 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 01:16:30.608879 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 25 01:16:30.608926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 25 01:16:30.621269 systemd[1]: Stopped target network.target - Network. Mar 25 01:16:30.631209 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 01:16:30.631270 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:16:30.642568 systemd[1]: Stopped target paths.target - Path Units. Mar 25 01:16:30.653067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 01:16:30.656466 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:16:30.665168 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 01:16:30.675486 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 01:16:30.686465 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 01:16:30.686508 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:16:30.697255 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Mar 25 01:16:30.697287 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:16:30.707992 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 01:16:30.708045 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 01:16:30.718420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 01:16:30.718484 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 01:16:30.728754 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 01:16:30.738497 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 01:16:30.751029 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 25 01:16:30.751695 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 01:16:30.751778 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 01:16:30.766663 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 01:16:30.766958 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 25 01:16:30.767056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 01:16:31.005580 kernel: hv_netvsc 000d3a07-a374-000d-3a07-a374000d3a07 eth0: Data path switched from VF: enP57511s1 Mar 25 01:16:30.782338 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 01:16:30.782589 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 25 01:16:30.782675 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 01:16:30.792439 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 25 01:16:30.792589 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 01:16:30.804616 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 01:16:30.804685 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:16:30.815000 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 01:16:30.815064 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 25 01:16:30.827557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 01:16:30.845342 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 01:16:30.845402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:16:30.855920 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:16:30.855962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:16:30.866119 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 01:16:30.866160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 01:16:30.871901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 01:16:30.871946 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:16:30.886898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:16:30.900709 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:16:30.900768 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:30.927604 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 25 01:16:30.927755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:16:30.939253 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 01:16:30.939299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 25 01:16:30.950178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 01:16:30.950218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:16:30.960173 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 01:16:30.960222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:16:31.231728 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 25 01:16:30.976065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 01:16:30.976110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 01:16:31.000359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:16:31.000416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:16:31.025595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 25 01:16:31.044728 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 01:16:31.044792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:16:31.056672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:16:31.056745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:16:31.072253 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 25 01:16:31.072315 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:31.072669 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 01:16:31.072764 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 01:16:31.101957 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 01:16:31.102080 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 01:16:31.114167 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 01:16:31.126587 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 01:16:31.162224 systemd[1]: Switching root. Mar 25 01:16:31.336024 systemd-journald[217]: Journal stopped Mar 25 01:16:37.242219 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 01:16:37.242240 kernel: SELinux: policy capability open_perms=1 Mar 25 01:16:37.242250 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 01:16:37.242258 kernel: SELinux: policy capability always_check_network=0 Mar 25 01:16:37.242266 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 01:16:37.242274 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 01:16:37.242282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 01:16:37.242290 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 01:16:37.242299 systemd[1]: Successfully loaded SELinux policy in 160.452ms. Mar 25 01:16:37.242308 kernel: audit: type=1403 audit(1742865392.693:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 01:16:37.242318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.675ms. 
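At this point the initrd journal stops, systemd switches root, and the next entries report the SELinux policy load and relabel timings from inside the real root. When pulling numbers like those out of a flat dump such as this one, it can help to split each entry back into timestamp, source and message; below is a small sketch, assuming the "Mon DD HH:MM:SS.ffffff source: message" shape used throughout this transcript (the dump carries no year, so strptime falls back to 1900).

import re
from datetime import datetime

# One entry of this dump: "Mar 25 01:16:31.336024 systemd-journald[217]: Journal stopped"
ENTRY = re.compile(r"^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) (?P<src>[^:]+): (?P<msg>.*)$")

def parse(line: str):
    """Split one journal line into (timestamp, source, message), or None."""
    m = ENTRY.match(line)
    if not m:
        return None
    ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")   # year defaults to 1900
    return ts, m["src"], m["msg"]

print(parse("Mar 25 01:16:31.336024 systemd-journald[217]: Journal stopped"))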
Mar 25 01:16:37.242328 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:16:37.242336 systemd[1]: Detected virtualization microsoft. Mar 25 01:16:37.242344 systemd[1]: Detected architecture arm64. Mar 25 01:16:37.242355 systemd[1]: Detected first boot. Mar 25 01:16:37.242365 systemd[1]: Hostname set to . Mar 25 01:16:37.242374 systemd[1]: Initializing machine ID from random generator. Mar 25 01:16:37.242383 zram_generator::config[1161]: No configuration found. Mar 25 01:16:37.242392 kernel: NET: Registered PF_VSOCK protocol family Mar 25 01:16:37.242400 systemd[1]: Populated /etc with preset unit settings. Mar 25 01:16:37.242409 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 01:16:37.242418 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 01:16:37.242428 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 01:16:37.242436 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 01:16:37.242457 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 01:16:37.242468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 01:16:37.242477 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 01:16:37.242486 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 01:16:37.242495 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 01:16:37.242506 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 01:16:37.242515 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 01:16:37.242524 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 01:16:37.242532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:16:37.242541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:16:37.242550 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 25 01:16:37.242560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 01:16:37.242569 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 01:16:37.242579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:16:37.242588 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 25 01:16:37.242597 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:16:37.242608 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 01:16:37.242617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 25 01:16:37.242626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 01:16:37.242635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Mar 25 01:16:37.242644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:16:37.242654 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:16:37.242663 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:16:37.242672 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:16:37.242681 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 01:16:37.242690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 01:16:37.242699 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 01:16:37.242710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:16:37.242719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:16:37.242728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:16:37.242737 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 01:16:37.242746 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 01:16:37.242756 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 01:16:37.242765 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 01:16:37.242776 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 01:16:37.242785 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 01:16:37.242794 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 01:16:37.242804 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 01:16:37.242813 systemd[1]: Reached target machines.target - Containers. Mar 25 01:16:37.242822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 01:16:37.242831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:37.242841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:16:37.242852 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 01:16:37.242861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:37.242870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:16:37.242879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:37.242888 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 01:16:37.242897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:37.242906 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 01:16:37.242916 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 25 01:16:37.242926 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 01:16:37.242935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 01:16:37.242944 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 25 01:16:37.242955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:37.242964 kernel: fuse: init (API version 7.39) Mar 25 01:16:37.242972 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:16:37.242981 kernel: loop: module loaded Mar 25 01:16:37.242989 kernel: ACPI: bus type drm_connector registered Mar 25 01:16:37.242997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:16:37.243008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 01:16:37.243033 systemd-journald[1255]: Collecting audit messages is disabled. Mar 25 01:16:37.243053 systemd-journald[1255]: Journal started Mar 25 01:16:37.243074 systemd-journald[1255]: Runtime Journal (/run/log/journal/ef671d2c78094726b4c5219ddbac943e) is 8M, max 78.5M, 70.5M free. Mar 25 01:16:36.328056 systemd[1]: Queued start job for default target multi-user.target. Mar 25 01:16:36.339180 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 25 01:16:36.339564 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 01:16:36.339868 systemd[1]: systemd-journald.service: Consumed 3.201s CPU time. Mar 25 01:16:37.267271 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 01:16:37.293562 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 01:16:37.311416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:16:37.320540 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 01:16:37.320586 systemd[1]: Stopped verity-setup.service. Mar 25 01:16:37.338295 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:16:37.339108 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 01:16:37.344974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 01:16:37.351256 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 01:16:37.356647 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 01:16:37.362654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 01:16:37.368916 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 01:16:37.374432 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 01:16:37.381854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:16:37.389397 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 01:16:37.389678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 01:16:37.396787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:37.396946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:37.403314 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:16:37.403578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:16:37.409812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:37.410006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:37.417135 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 25 01:16:37.417281 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 01:16:37.423806 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:37.423992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:37.430337 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:16:37.436762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 01:16:37.445475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 01:16:37.454467 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 01:16:37.463591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:16:37.481496 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 01:16:37.488948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:16:37.504548 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 25 01:16:37.510797 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 01:16:37.510838 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:16:37.517501 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 01:16:37.526623 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 01:16:37.539398 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 01:16:37.545178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:37.546284 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 01:16:37.553611 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 01:16:37.561348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:16:37.563613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 01:16:37.569977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:16:37.583577 systemd-journald[1255]: Time spent on flushing to /var/log/journal/ef671d2c78094726b4c5219ddbac943e is 33.256ms for 911 entries. Mar 25 01:16:37.583577 systemd-journald[1255]: System Journal (/var/log/journal/ef671d2c78094726b4c5219ddbac943e) is 8M, max 2.6G, 2.6G free. Mar 25 01:16:37.669016 systemd-journald[1255]: Received client request to flush runtime journal. Mar 25 01:16:37.669082 kernel: loop0: detected capacity change from 0 to 194096 Mar 25 01:16:37.581925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:16:37.596586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 01:16:37.610598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 01:16:37.620989 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 01:16:37.637708 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Mar 25 01:16:37.650016 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 01:16:37.657905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 25 01:16:37.666166 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 01:16:37.674011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 01:16:37.686669 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 25 01:16:37.687612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 01:16:37.690604 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:16:37.701003 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 25 01:16:37.709725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:16:37.740683 kernel: loop1: detected capacity change from 0 to 28888 Mar 25 01:16:37.776821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 01:16:37.777543 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 01:16:37.945270 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 01:16:37.953057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:16:38.037912 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Mar 25 01:16:38.037936 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Mar 25 01:16:38.042854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:16:38.145488 kernel: loop2: detected capacity change from 0 to 126448 Mar 25 01:16:38.499490 kernel: loop3: detected capacity change from 0 to 103832 Mar 25 01:16:38.676957 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:16:38.685420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:16:38.723055 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Mar 25 01:16:38.828472 kernel: loop4: detected capacity change from 0 to 194096 Mar 25 01:16:38.841489 kernel: loop5: detected capacity change from 0 to 28888 Mar 25 01:16:38.851537 kernel: loop6: detected capacity change from 0 to 126448 Mar 25 01:16:38.864479 kernel: loop7: detected capacity change from 0 to 103832 Mar 25 01:16:38.870479 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 25 01:16:38.870919 (sd-merge)[1328]: Merged extensions into '/usr'. Mar 25 01:16:38.874640 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 01:16:38.874661 systemd[1]: Reloading... Mar 25 01:16:38.937508 zram_generator::config[1357]: No configuration found. Mar 25 01:16:39.135782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
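The (sd-merge) lines above are systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extension images and overlaying them onto /usr, after which systemd reloads its unit set; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this log. As a hedged illustration only (not the merge logic itself), a sketch that walks the two directories involved and resolves any symlinked .raw images:

from pathlib import Path

# Directories the Ignition files stage populated earlier in this log
# (including the /etc/extensions/kubernetes.raw symlink).
SEARCH_DIRS = [Path("/etc/extensions"), Path("/opt/extensions")]

def list_sysext_images():
    """Collect candidate sysext images (*.raw) and where their symlinks point."""
    found = []
    for base in SEARCH_DIRS:
        if not base.is_dir():
            continue
        for entry in sorted(base.rglob("*.raw")):
            found.append((entry, entry.resolve()))   # resolve() follows kubernetes.raw
    return found

if __name__ == "__main__":
    for link, target in list_sysext_images():
        print(f"{link} -> {target}")

Each image is expected to ship an extension-release file under /usr/lib/extension-release.d/ naming the extension, which systemd-sysext checks against the host's os-release before merging.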
Mar 25 01:16:39.141629 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 01:16:39.234623 kernel: hv_vmbus: registering driver hv_balloon Mar 25 01:16:39.235044 kernel: hv_vmbus: registering driver hyperv_fb Mar 25 01:16:39.240488 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 25 01:16:39.258722 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 25 01:16:39.258798 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 25 01:16:39.258815 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 25 01:16:39.253421 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 25 01:16:39.253756 systemd[1]: Reloading finished in 378 ms. Mar 25 01:16:39.265927 kernel: Console: switching to colour dummy device 80x25 Mar 25 01:16:39.274640 kernel: Console: switching to colour frame buffer device 128x48 Mar 25 01:16:39.279894 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:16:39.290206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 01:16:39.325518 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1419) Mar 25 01:16:39.341015 systemd[1]: Starting ensure-sysext.service... Mar 25 01:16:39.350937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:16:39.370706 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:16:39.386855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:16:39.407104 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 01:16:39.407310 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 01:16:39.407966 systemd-tmpfiles[1503]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 01:16:39.408165 systemd-tmpfiles[1503]: ACLs are not supported, ignoring. Mar 25 01:16:39.408226 systemd-tmpfiles[1503]: ACLs are not supported, ignoring. Mar 25 01:16:39.417191 systemd-tmpfiles[1503]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:16:39.417207 systemd-tmpfiles[1503]: Skipping /boot Mar 25 01:16:39.426881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 25 01:16:39.428349 systemd-tmpfiles[1503]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:16:39.428354 systemd-tmpfiles[1503]: Skipping /boot Mar 25 01:16:39.450405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:16:39.468652 systemd[1]: Reload requested from client PID 1483 ('systemctl') (unit ensure-sysext.service)... Mar 25 01:16:39.468767 systemd[1]: Reloading... Mar 25 01:16:39.540472 zram_generator::config[1546]: No configuration found. Mar 25 01:16:39.648674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:39.764942 systemd[1]: Reloading finished in 295 ms. Mar 25 01:16:39.815658 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:16:39.829277 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 25 01:16:39.845689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 01:16:39.854101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:39.855298 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:16:39.864823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:39.880675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:39.889663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:39.899935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:39.907685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:16:39.914416 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:39.917779 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:16:39.927718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:16:39.936678 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 01:16:39.950348 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 01:16:39.957465 lvm[1608]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:16:39.957781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:16:39.957956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:16:39.969208 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:16:39.980701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:16:39.994303 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:40.001640 augenrules[1638]: No rules Mar 25 01:16:39.997155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:16:40.006472 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:16:40.008483 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:16:40.017876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:40.020630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:40.029092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:40.029254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:40.036132 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:40.036303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:40.042228 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:16:40.053036 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 01:16:40.062433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 25 01:16:40.078516 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 01:16:40.098890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 01:16:40.114016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:16:40.120252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:40.123666 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:16:40.140781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:40.152025 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:16:40.153706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:40.171061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:40.180064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:40.180217 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:40.184249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:40.184438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:40.191624 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:16:40.198994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:40.199746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:40.207822 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:40.207972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:40.223718 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:16:40.230173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:40.238593 systemd-resolved[1629]: Positive Trust Anchors: Mar 25 01:16:40.238602 systemd-resolved[1629]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:16:40.238648 systemd-resolved[1629]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:16:40.239266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:40.251535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:16:40.253829 systemd-resolved[1629]: Using system hostname 'ci-4284.0.0-a-be6d65597e'. Mar 25 01:16:40.260754 augenrules[1671]: /sbin/augenrules: No change Mar 25 01:16:40.261761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
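Among the entries above, systemd-resolved prints its positive trust anchor as a root DS record. The record's fields are owner, class, type, key tag, algorithm, digest type and digest; the sketch below splits the quoted string into those fields, with friendly names taken from the IANA DNSSEC registries (algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256).

# The positive trust anchor logged above, as one string.
DS_RECORD = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

ALGORITHMS = {8: "RSA/SHA-256"}    # IANA DNSSEC algorithm numbers
DIGEST_TYPES = {2: "SHA-256"}      # IANA DS digest types

def parse_ds(record: str) -> dict:
    """Break a DS record string into its named fields."""
    owner, _cls, _type, key_tag, alg, digest_type, digest = record.split()
    return {
        "owner": owner,                    # "." is the DNS root
        "key_tag": int(key_tag),           # 20326 identifies the root key-signing key
        "algorithm": ALGORITHMS.get(int(alg), alg),
        "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
        "digest": digest,
    }

print(parse_ds(DS_RECORD))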
Mar 25 01:16:40.273995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:40.276386 systemd-networkd[1487]: lo: Link UP Mar 25 01:16:40.279073 systemd-networkd[1487]: lo: Gained carrier Mar 25 01:16:40.281247 systemd-networkd[1487]: Enumeration completed Mar 25 01:16:40.282068 augenrules[1692]: No rules Mar 25 01:16:40.282747 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:40.282753 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:16:40.285546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:40.285714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:40.285889 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:16:40.294361 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:16:40.301107 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:16:40.308033 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:16:40.309480 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:16:40.316058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:40.316242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:40.323236 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:16:40.323376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:16:40.329889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:40.330017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:40.337214 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:40.337343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:40.346139 systemd[1]: Reached target network.target - Network. Mar 25 01:16:40.352692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:16:40.364663 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 01:16:40.374645 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:16:40.383241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:16:40.383381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:16:40.420329 systemd[1]: Finished ensure-sysext.service. Mar 25 01:16:40.428668 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 01:16:40.627497 kernel: mlx5_core e0a7:00:02.0 enP57511s1: Link up Mar 25 01:16:40.628626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 25 01:16:40.707806 kernel: hv_netvsc 000d3a07-a374-000d-3a07-a374000d3a07 eth0: Data path switched to VF: enP57511s1 Mar 25 01:16:40.708717 systemd-networkd[1487]: enP57511s1: Link UP Mar 25 01:16:40.708812 systemd-networkd[1487]: eth0: Link UP Mar 25 01:16:40.708815 systemd-networkd[1487]: eth0: Gained carrier Mar 25 01:16:40.708830 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:40.710487 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 01:16:40.718790 systemd-networkd[1487]: enP57511s1: Gained carrier Mar 25 01:16:40.728507 systemd-networkd[1487]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 25 01:16:42.058624 systemd-networkd[1487]: enP57511s1: Gained IPv6LL Mar 25 01:16:42.442556 systemd-networkd[1487]: eth0: Gained IPv6LL Mar 25 01:16:42.448002 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:16:42.455585 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:16:43.004507 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 01:16:43.022580 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 01:16:43.031889 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 01:16:43.049647 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 01:16:43.055966 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:16:43.062297 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:16:43.068901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:16:43.075702 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:16:43.081203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 01:16:43.087956 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:16:43.094740 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:16:43.094774 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:16:43.099690 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:16:43.105083 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:16:43.112009 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:16:43.119425 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:16:43.126283 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:16:43.133275 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:16:43.141769 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:16:43.147783 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 01:16:43.154806 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:16:43.160394 systemd[1]: Reached target sockets.target - Socket Units. 
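A few entries back, eth0 obtains 10.200.20.47/24 with gateway 10.200.20.1 from 168.63.129.16, the data path switches to the enP57511s1 VF, and both links later gain IPv6LL. A short sketch with Python's standard ipaddress module spells out what that lease means; the addresses are the ones from the lease line above, nothing else is assumed.

import ipaddress

iface = ipaddress.ip_interface("10.200.20.47/24")     # address from the DHCPv4 lease above
gateway = ipaddress.ip_address("10.200.20.1")
dhcp_server = ipaddress.ip_address("168.63.129.16")   # Azure wireserver that handed out the lease

net = iface.network
print(net)                 # 10.200.20.0/24
print(net.num_addresses)   # 256 addresses in the /24
print(gateway in net)      # True  - the gateway is on-link
print(dhcp_server in net)  # False - not on-link, so it is reached via a route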
Mar 25 01:16:43.165357 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:16:43.170384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:16:43.170416 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:16:43.172940 systemd[1]: Starting chronyd.service - NTP client/server... Mar 25 01:16:43.186548 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:16:43.194634 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 01:16:43.208503 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 01:16:43.216426 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:16:43.229526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:16:43.237187 jq[1720]: false Mar 25 01:16:43.237858 (chronyd)[1713]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 25 01:16:43.237921 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:16:43.237958 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 25 01:16:43.239711 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 25 01:16:43.246743 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 25 01:16:43.248659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:43.257209 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:16:43.266597 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:16:43.280667 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:16:43.291414 KVP[1722]: KVP starting; pid is:1722 Mar 25 01:16:43.306015 kernel: hv_utils: KVP IC version 4.0 Mar 25 01:16:43.291539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:16:43.302914 chronyd[1737]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 25 01:16:43.306299 extend-filesystems[1721]: Found loop4 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found loop5 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found loop6 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found loop7 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda1 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda2 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda3 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found usr Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda4 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda6 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda7 Mar 25 01:16:43.306299 extend-filesystems[1721]: Found sda9 Mar 25 01:16:43.306299 extend-filesystems[1721]: Checking size of /dev/sda9 Mar 25 01:16:43.304074 KVP[1722]: KVP LIC Version: 3.1 Mar 25 01:16:43.311631 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 25 01:16:43.440115 extend-filesystems[1721]: Old size kept for /dev/sda9 Mar 25 01:16:43.440115 extend-filesystems[1721]: Found sr0 Mar 25 01:16:43.313871 chronyd[1737]: Timezone right/UTC failed leap second check, ignoring Mar 25 01:16:43.330893 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 01:16:43.318888 chronyd[1737]: Loaded seccomp filter (level 2) Mar 25 01:16:43.343087 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:16:43.432048 dbus-daemon[1716]: [system] SELinux support is enabled Mar 25 01:16:43.343671 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 01:16:43.344297 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 01:16:43.470919 update_engine[1744]: I20250325 01:16:43.441887 1744 main.cc:92] Flatcar Update Engine starting Mar 25 01:16:43.357862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:16:43.471146 jq[1746]: true Mar 25 01:16:43.372290 systemd[1]: Started chronyd.service - NTP client/server. Mar 25 01:16:43.386489 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:16:43.386708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:16:43.480319 update_engine[1744]: I20250325 01:16:43.471711 1744 update_check_scheduler.cc:74] Next update check in 8m44s Mar 25 01:16:43.389848 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 01:16:43.390028 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 01:16:43.450975 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:16:43.461951 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:16:43.463492 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 01:16:43.474902 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:16:43.484759 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 01:16:43.485262 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:16:43.510719 (ntainerd)[1763]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:16:43.521296 jq[1762]: true Mar 25 01:16:43.526591 systemd[1]: Started update-engine.service - Update Engine. Mar 25 01:16:43.536515 systemd-logind[1741]: New seat seat0. Mar 25 01:16:43.543026 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 01:16:43.543508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:16:43.551318 systemd-logind[1741]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 25 01:16:43.556286 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 01:16:43.556427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 25 01:16:43.578788 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 01:16:43.585555 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:16:43.595575 coreos-metadata[1715]: Mar 25 01:16:43.594 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 25 01:16:43.595843 tar[1755]: linux-arm64/helm Mar 25 01:16:43.604574 coreos-metadata[1715]: Mar 25 01:16:43.603 INFO Fetch successful Mar 25 01:16:43.604574 coreos-metadata[1715]: Mar 25 01:16:43.603 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 25 01:16:43.609385 coreos-metadata[1715]: Mar 25 01:16:43.608 INFO Fetch successful Mar 25 01:16:43.609385 coreos-metadata[1715]: Mar 25 01:16:43.609 INFO Fetching http://168.63.129.16/machine/09b1d2da-57fb-4dce-a222-f6849d558ebd/efd90b15%2D4c64%2D44fd%2D9814%2D970c22ca3bf4.%5Fci%2D4284.0.0%2Da%2Dbe6d65597e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 25 01:16:43.611058 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1784) Mar 25 01:16:43.611500 coreos-metadata[1715]: Mar 25 01:16:43.611 INFO Fetch successful Mar 25 01:16:43.611571 coreos-metadata[1715]: Mar 25 01:16:43.611 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 25 01:16:43.626756 coreos-metadata[1715]: Mar 25 01:16:43.626 INFO Fetch successful Mar 25 01:16:43.683903 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 01:16:43.703854 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:16:43.727970 bash[1813]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:16:43.729207 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:16:43.740716 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
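The coreos-metadata fetches above go first to the Azure wireserver (168.63.129.16) for versions, goal state and shared config, then to the instance metadata service at 169.254.169.254 for the VM size. As a hedged sketch of that last request only (not the agent's own code): the same URL and api-version as in the log, queried with the standard library; Azure's IMDS answers only when the request carries a "Metadata: true" header.

import urllib.request

# Same instance-metadata URL that coreos-metadata logs above.
IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout: float = 2.0) -> str:
    """Return the VM size as plain text from Azure IMDS (link-local, no auth)."""
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print(fetch_vm_size())   # prints the instance's VM size string (value depends on the host)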
Mar 25 01:16:43.854555 locksmithd[1794]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:16:44.268168 containerd[1763]: time="2025-03-25T01:16:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:16:44.268947 containerd[1763]: time="2025-03-25T01:16:44.268921960Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279354160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.32µs" Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279630680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279671960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279829000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279845640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279872240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279928640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:16:44.279941 containerd[1763]: time="2025-03-25T01:16:44.279941080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:16:44.280224 containerd[1763]: time="2025-03-25T01:16:44.280191360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:16:44.280224 containerd[1763]: time="2025-03-25T01:16:44.280215000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:16:44.280275 containerd[1763]: time="2025-03-25T01:16:44.280226680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:16:44.280275 containerd[1763]: time="2025-03-25T01:16:44.280235280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:16:44.280336 containerd[1763]: time="2025-03-25T01:16:44.280314760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:16:44.283589 containerd[1763]: time="2025-03-25T01:16:44.283559280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:16:44.283637 containerd[1763]: time="2025-03-25T01:16:44.283604120Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:16:44.283637 containerd[1763]: time="2025-03-25T01:16:44.283616040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:16:44.283693 containerd[1763]: time="2025-03-25T01:16:44.283648160Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:16:44.283896 containerd[1763]: time="2025-03-25T01:16:44.283875480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:16:44.283969 containerd[1763]: time="2025-03-25T01:16:44.283948600Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310753800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310830040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310846840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310864040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310876160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310887360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310900200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310920960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310932040Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310942520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310951720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.310963280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.311142640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:16:44.312472 containerd[1763]: time="2025-03-25T01:16:44.311165160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311178240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 
01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311193080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311204920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311217240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311227960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311238320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311266880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311278560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311288760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311364040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311380280Z" level=info msg="Start snapshots syncer" Mar 25 01:16:44.312782 containerd[1763]: time="2025-03-25T01:16:44.311403080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:16:44.312976 containerd[1763]: time="2025-03-25T01:16:44.311668600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:16:44.312976 containerd[1763]: time="2025-03-25T01:16:44.311717720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311791600Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311893920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311915320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311927960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311939760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311952240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311962360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311973280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.311997000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: 
time="2025-03-25T01:16:44.312009320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.312018760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.312055440Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.312069720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:16:44.313071 containerd[1763]: time="2025-03-25T01:16:44.312078560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312087240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312094840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312103760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312114160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312129640Z" level=info msg="runtime interface created" Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312134200Z" level=info msg="created NRI interface" Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312143160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312155520Z" level=info msg="Connect containerd service" Mar 25 01:16:44.313295 containerd[1763]: time="2025-03-25T01:16:44.312180920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:16:44.314050 containerd[1763]: time="2025-03-25T01:16:44.314023400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:16:44.375860 tar[1755]: linux-arm64/LICENSE Mar 25 01:16:44.376201 tar[1755]: linux-arm64/README.md Mar 25 01:16:44.392492 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:16:44.539394 sshd_keygen[1747]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:16:44.556881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:16:44.565401 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:16:44.667425 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 25 01:16:44.681039 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:16:44.681247 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 25 01:16:44.691816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:16:44.712440 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 25 01:16:44.732476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:16:44.744731 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:16:44.755723 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 25 01:16:44.765440 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:16:44.966611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:45.025008 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:45.081280 containerd[1763]: time="2025-03-25T01:16:45.081204800Z" level=info msg="Start subscribing containerd event" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.081315680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.081930080Z" level=info msg="Start recovering state" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.081976280Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082073200Z" level=info msg="Start event monitor" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082092280Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082101120Z" level=info msg="Start streaming server" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082109960Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082116760Z" level=info msg="runtime interface starting up..." Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082133040Z" level=info msg="starting plugins..." Mar 25 01:16:45.082213 containerd[1763]: time="2025-03-25T01:16:45.082146960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:16:45.088671 containerd[1763]: time="2025-03-25T01:16:45.082565360Z" level=info msg="containerd successfully booted in 0.814728s" Mar 25 01:16:45.082697 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:16:45.090308 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:16:45.098587 systemd[1]: Startup finished in 636ms (kernel) + 12.653s (initrd) + 12.565s (userspace) = 25.855s. Mar 25 01:16:45.353107 login[1905]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:45.355321 login[1906]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:45.365182 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:16:45.367833 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:16:45.371520 systemd-logind[1741]: New session 1 of user core. Mar 25 01:16:45.378608 systemd-logind[1741]: New session 2 of user core. Mar 25 01:16:45.385421 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:16:45.389632 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 25 01:16:45.412900 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:16:45.416055 systemd-logind[1741]: New session c1 of user core. Mar 25 01:16:45.549082 kubelet[1913]: E0325 01:16:45.548944 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:45.555435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:45.555592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:45.555876 systemd[1]: kubelet.service: Consumed 707ms CPU time, 243.2M memory peak. Mar 25 01:16:45.591906 systemd[1932]: Queued start job for default target default.target. Mar 25 01:16:45.602757 systemd[1932]: Created slice app.slice - User Application Slice. Mar 25 01:16:45.602787 systemd[1932]: Reached target paths.target - Paths. Mar 25 01:16:45.602825 systemd[1932]: Reached target timers.target - Timers. Mar 25 01:16:45.603963 systemd[1932]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:16:45.615044 systemd[1932]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:16:45.615135 systemd[1932]: Reached target sockets.target - Sockets. Mar 25 01:16:45.615167 systemd[1932]: Reached target basic.target - Basic System. Mar 25 01:16:45.615194 systemd[1932]: Reached target default.target - Main User Target. Mar 25 01:16:45.615216 systemd[1932]: Startup finished in 191ms. Mar 25 01:16:45.615625 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:16:45.626586 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:16:45.627301 systemd[1]: Started session-2.scope - Session 2 of User core. 
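The kubelet failure a few entries back (and at every scheduled restart later in the log) just means /var/lib/kubelet/config.yaml has not been written yet, which is the normal state before `kubeadm init` or `kubeadm join` provisions the node. A small sketch that reproduces the check and shows the rough shape of such a file, assuming the kubelet.config.k8s.io/v1beta1 config API; the field values are illustrative only, not what kubeadm would generate for this node.

```python
# Illustrative only: show why kubelet exits with status 1 at this point in the
# boot. The skeleton below assumes the kubelet.config.k8s.io/v1beta1 API; the
# values are placeholders, not what kubeadm writes for this node.
import pathlib, textwrap

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

SKELETON = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup=true in the CRI config dumped above
    staticPodPath: /etc/kubernetes/manifests
""")

if CONFIG.exists():
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can start")
else:
    # This is the state the log shows: kubeadm has not provisioned the node yet.
    print(f"{CONFIG} missing -> kubelet exits 1 until kubeadm init/join writes it, e.g.:")
    print(SKELETON)
```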
Mar 25 01:16:46.296535 waagent[1903]: 2025-03-25T01:16:46.296442Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 25 01:16:46.302198 waagent[1903]: 2025-03-25T01:16:46.302147Z INFO Daemon Daemon OS: flatcar 4284.0.0 Mar 25 01:16:46.306942 waagent[1903]: 2025-03-25T01:16:46.306905Z INFO Daemon Daemon Python: 3.11.11 Mar 25 01:16:46.311736 waagent[1903]: 2025-03-25T01:16:46.311693Z INFO Daemon Daemon Run daemon Mar 25 01:16:46.316094 waagent[1903]: 2025-03-25T01:16:46.316023Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' Mar 25 01:16:46.325432 waagent[1903]: 2025-03-25T01:16:46.325388Z INFO Daemon Daemon Using waagent for provisioning Mar 25 01:16:46.330677 waagent[1903]: 2025-03-25T01:16:46.330639Z INFO Daemon Daemon Activate resource disk Mar 25 01:16:46.335225 waagent[1903]: 2025-03-25T01:16:46.335192Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 25 01:16:46.346181 waagent[1903]: 2025-03-25T01:16:46.346140Z INFO Daemon Daemon Found device: None Mar 25 01:16:46.350336 waagent[1903]: 2025-03-25T01:16:46.350301Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 25 01:16:46.358343 waagent[1903]: 2025-03-25T01:16:46.358311Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 25 01:16:46.369154 waagent[1903]: 2025-03-25T01:16:46.369114Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:16:46.374680 waagent[1903]: 2025-03-25T01:16:46.374647Z INFO Daemon Daemon Running default provisioning handler Mar 25 01:16:46.385978 waagent[1903]: 2025-03-25T01:16:46.385336Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 25 01:16:46.398833 waagent[1903]: 2025-03-25T01:16:46.398779Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 25 01:16:46.408132 waagent[1903]: 2025-03-25T01:16:46.408090Z INFO Daemon Daemon cloud-init is enabled: False Mar 25 01:16:46.413211 waagent[1903]: 2025-03-25T01:16:46.413175Z INFO Daemon Daemon Copying ovf-env.xml Mar 25 01:16:46.469918 waagent[1903]: 2025-03-25T01:16:46.469837Z INFO Daemon Daemon Successfully mounted dvd Mar 25 01:16:46.485440 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 25 01:16:46.487289 waagent[1903]: 2025-03-25T01:16:46.487222Z INFO Daemon Daemon Detect protocol endpoint Mar 25 01:16:46.492165 waagent[1903]: 2025-03-25T01:16:46.492119Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:16:46.498029 waagent[1903]: 2025-03-25T01:16:46.497989Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 25 01:16:46.504313 waagent[1903]: 2025-03-25T01:16:46.504272Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 25 01:16:46.509394 waagent[1903]: 2025-03-25T01:16:46.509355Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 25 01:16:46.514256 waagent[1903]: 2025-03-25T01:16:46.514220Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 25 01:16:46.563301 waagent[1903]: 2025-03-25T01:16:46.563200Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 25 01:16:46.569440 waagent[1903]: 2025-03-25T01:16:46.569414Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 25 01:16:46.574364 waagent[1903]: 2025-03-25T01:16:46.574324Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 25 01:16:46.741484 waagent[1903]: 2025-03-25T01:16:46.741388Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 25 01:16:46.748328 waagent[1903]: 2025-03-25T01:16:46.748274Z INFO Daemon Daemon Forcing an update of the goal state. Mar 25 01:16:46.758338 waagent[1903]: 2025-03-25T01:16:46.758292Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:16:46.784719 waagent[1903]: 2025-03-25T01:16:46.784681Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 25 01:16:46.790989 waagent[1903]: 2025-03-25T01:16:46.790947Z INFO Daemon Mar 25 01:16:46.794187 waagent[1903]: 2025-03-25T01:16:46.794150Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 323238e4-608f-450b-bf2e-aed85ef84b1c eTag: 9729443357181939536 source: Fabric] Mar 25 01:16:46.807147 waagent[1903]: 2025-03-25T01:16:46.807103Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 25 01:16:46.814872 waagent[1903]: 2025-03-25T01:16:46.814798Z INFO Daemon Mar 25 01:16:46.817949 waagent[1903]: 2025-03-25T01:16:46.817909Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:16:46.830002 waagent[1903]: 2025-03-25T01:16:46.829967Z INFO Daemon Daemon Downloading artifacts profile blob Mar 25 01:16:46.927192 waagent[1903]: 2025-03-25T01:16:46.925491Z INFO Daemon Downloaded certificate {'thumbprint': '7D49523096C26F3475C38D9203F6783CDC37D87E', 'hasPrivateKey': True} Mar 25 01:16:46.935064 waagent[1903]: 2025-03-25T01:16:46.935016Z INFO Daemon Downloaded certificate {'thumbprint': '2AFC58028CFC880AB23B2AE3F4EF463A0D44BC85', 'hasPrivateKey': False} Mar 25 01:16:46.944450 waagent[1903]: 2025-03-25T01:16:46.944399Z INFO Daemon Fetch goal state completed Mar 25 01:16:46.955033 waagent[1903]: 2025-03-25T01:16:46.954970Z INFO Daemon Daemon Starting provisioning Mar 25 01:16:46.959951 waagent[1903]: 2025-03-25T01:16:46.959908Z INFO Daemon Daemon Handle ovf-env.xml. Mar 25 01:16:46.964353 waagent[1903]: 2025-03-25T01:16:46.964319Z INFO Daemon Daemon Set hostname [ci-4284.0.0-a-be6d65597e] Mar 25 01:16:46.987356 waagent[1903]: 2025-03-25T01:16:46.987294Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-a-be6d65597e] Mar 25 01:16:46.993643 waagent[1903]: 2025-03-25T01:16:46.993593Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 25 01:16:46.999715 waagent[1903]: 2025-03-25T01:16:46.999677Z INFO Daemon Daemon Primary interface is [eth0] Mar 25 01:16:47.011959 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:47.011966 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 25 01:16:47.011993 systemd-networkd[1487]: eth0: DHCP lease lost Mar 25 01:16:47.013077 waagent[1903]: 2025-03-25T01:16:47.013027Z INFO Daemon Daemon Create user account if not exists Mar 25 01:16:47.018307 waagent[1903]: 2025-03-25T01:16:47.018268Z INFO Daemon Daemon User core already exists, skip useradd Mar 25 01:16:47.023856 waagent[1903]: 2025-03-25T01:16:47.023816Z INFO Daemon Daemon Configure sudoer Mar 25 01:16:47.032436 waagent[1903]: 2025-03-25T01:16:47.028575Z INFO Daemon Daemon Configure sshd Mar 25 01:16:47.032928 waagent[1903]: 2025-03-25T01:16:47.032884Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 25 01:16:47.046186 waagent[1903]: 2025-03-25T01:16:47.046131Z INFO Daemon Daemon Deploy ssh public key. Mar 25 01:16:47.060534 systemd-networkd[1487]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 25 01:16:48.142182 waagent[1903]: 2025-03-25T01:16:48.142137Z INFO Daemon Daemon Provisioning complete Mar 25 01:16:48.159351 waagent[1903]: 2025-03-25T01:16:48.159310Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 25 01:16:48.165749 waagent[1903]: 2025-03-25T01:16:48.165707Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 25 01:16:48.175702 waagent[1903]: 2025-03-25T01:16:48.175659Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 25 01:16:48.306800 waagent[1989]: 2025-03-25T01:16:48.306248Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 25 01:16:48.306800 waagent[1989]: 2025-03-25T01:16:48.306380Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 Mar 25 01:16:48.306800 waagent[1989]: 2025-03-25T01:16:48.306423Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 25 01:16:48.306800 waagent[1989]: 2025-03-25T01:16:48.306493Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 25 01:16:48.359493 waagent[1989]: 2025-03-25T01:16:48.359409Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 25 01:16:48.359821 waagent[1989]: 2025-03-25T01:16:48.359793Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:16:48.359939 waagent[1989]: 2025-03-25T01:16:48.359917Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:16:48.366866 waagent[1989]: 2025-03-25T01:16:48.366814Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:16:48.372971 waagent[1989]: 2025-03-25T01:16:48.372934Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 25 01:16:48.374470 waagent[1989]: 2025-03-25T01:16:48.373562Z INFO ExtHandler Mar 25 01:16:48.374470 waagent[1989]: 2025-03-25T01:16:48.373636Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ba5777fc-73da-4d76-b26a-44d6b1d75742 eTag: 9729443357181939536 source: Fabric] Mar 25 01:16:48.374470 waagent[1989]: 2025-03-25T01:16:48.373890Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 25 01:16:48.374470 waagent[1989]: 2025-03-25T01:16:48.374350Z INFO ExtHandler Mar 25 01:16:48.374470 waagent[1989]: 2025-03-25T01:16:48.374400Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:16:48.378876 waagent[1989]: 2025-03-25T01:16:48.378849Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:16:48.503586 waagent[1989]: 2025-03-25T01:16:48.503505Z INFO ExtHandler Downloaded certificate {'thumbprint': '7D49523096C26F3475C38D9203F6783CDC37D87E', 'hasPrivateKey': True} Mar 25 01:16:48.503999 waagent[1989]: 2025-03-25T01:16:48.503961Z INFO ExtHandler Downloaded certificate {'thumbprint': '2AFC58028CFC880AB23B2AE3F4EF463A0D44BC85', 'hasPrivateKey': False} Mar 25 01:16:48.504379 waagent[1989]: 2025-03-25T01:16:48.504344Z INFO ExtHandler Fetch goal state completed Mar 25 01:16:48.520634 waagent[1989]: 2025-03-25T01:16:48.520570Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Mar 25 01:16:48.525080 waagent[1989]: 2025-03-25T01:16:48.525016Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1989 Mar 25 01:16:48.525218 waagent[1989]: 2025-03-25T01:16:48.525185Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 25 01:16:48.525582 waagent[1989]: 2025-03-25T01:16:48.525549Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 25 01:16:48.527027 waagent[1989]: 2025-03-25T01:16:48.526990Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 25 01:16:48.527411 waagent[1989]: 2025-03-25T01:16:48.527378Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 25 01:16:48.527586 waagent[1989]: 2025-03-25T01:16:48.527554Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 25 01:16:48.528158 waagent[1989]: 2025-03-25T01:16:48.528125Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 25 01:16:52.066890 waagent[1989]: 2025-03-25T01:16:52.066845Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 25 01:16:52.557024 waagent[1989]: 2025-03-25T01:16:52.556111Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 25 01:16:52.562766 waagent[1989]: 2025-03-25T01:16:52.562736Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 25 01:16:52.569406 systemd[1]: Reload requested from client PID 2010 ('systemctl') (unit waagent.service)... Mar 25 01:16:52.569423 systemd[1]: Reloading... Mar 25 01:16:52.659528 zram_generator::config[2055]: No configuration found. Mar 25 01:16:52.763603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:52.879827 systemd[1]: Reloading finished in 310 ms. 
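The `thumbprint` values waagent prints for the goal-state certificates earlier in this stretch follow, to the best of my knowledge, the usual Azure convention: the SHA-1 digest of the DER-encoded certificate, upper-case hex. A sketch that recomputes one from a PEM file; the input path is a placeholder (waagent typically keeps these certificates under /var/lib/waagent).

```python
# Sketch: recompute a certificate "thumbprint" like the ones waagent logs.
# Assumption: the thumbprint is the SHA-1 digest of the DER-encoded certificate
# (the usual Azure/Windows convention); the input path is a placeholder.
import hashlib, ssl, sys

def thumbprint(pem_path: str) -> str:
    pem = open(pem_path, "r").read()
    der = ssl.PEM_cert_to_DER_cert(pem)   # strip PEM armour, decode the base64 body
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    # e.g. one of the certificates stored under /var/lib/waagent/*.crt
    print(thumbprint(sys.argv[1]))
```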
Mar 25 01:16:52.897263 waagent[1989]: 2025-03-25T01:16:52.895802Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 25 01:16:52.897263 waagent[1989]: 2025-03-25T01:16:52.895945Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 25 01:16:55.652029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:16:55.653598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:02.473003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:02.478761 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:02.514797 kubelet[2115]: E0325 01:17:02.514747 2115 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:02.518195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:02.518487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:02.518838 systemd[1]: kubelet.service: Consumed 128ms CPU time, 96.2M memory peak. Mar 25 01:17:03.737484 waagent[1989]: 2025-03-25T01:17:03.737380Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 25 01:17:03.737789 waagent[1989]: 2025-03-25T01:17:03.737753Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 25 01:17:03.738517 waagent[1989]: 2025-03-25T01:17:03.738424Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 25 01:17:03.738898 waagent[1989]: 2025-03-25T01:17:03.738805Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 25 01:17:03.739325 waagent[1989]: 2025-03-25T01:17:03.739226Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 25 01:17:03.739464 waagent[1989]: 2025-03-25T01:17:03.739326Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 25 01:17:03.740608 waagent[1989]: 2025-03-25T01:17:03.739832Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:17:03.740608 waagent[1989]: 2025-03-25T01:17:03.739914Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:17:03.740608 waagent[1989]: 2025-03-25T01:17:03.740049Z INFO EnvHandler ExtHandler Configure routes Mar 25 01:17:03.740608 waagent[1989]: 2025-03-25T01:17:03.740102Z INFO EnvHandler ExtHandler Gateway:None Mar 25 01:17:03.740608 waagent[1989]: 2025-03-25T01:17:03.740140Z INFO EnvHandler ExtHandler Routes:None Mar 25 01:17:03.740924 waagent[1989]: 2025-03-25T01:17:03.740877Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 25 01:17:03.741109 waagent[1989]: 2025-03-25T01:17:03.741081Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 25 01:17:03.741611 waagent[1989]: 2025-03-25T01:17:03.741584Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:17:03.741772 waagent[1989]: 2025-03-25T01:17:03.741740Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:17:03.742060 waagent[1989]: 2025-03-25T01:17:03.742014Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 25 01:17:03.742562 waagent[1989]: 2025-03-25T01:17:03.742506Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 25 01:17:03.743828 waagent[1989]: 2025-03-25T01:17:03.743787Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 25 01:17:03.743828 waagent[1989]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 25 01:17:03.743828 waagent[1989]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 25 01:17:03.743828 waagent[1989]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 25 01:17:03.743828 waagent[1989]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:17:03.743828 waagent[1989]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:17:03.743828 waagent[1989]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:17:03.754955 waagent[1989]: 2025-03-25T01:17:03.754910Z INFO ExtHandler ExtHandler Mar 25 01:17:03.756470 waagent[1989]: 2025-03-25T01:17:03.755116Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f5c79db6-0809-4a54-a209-b1492d1b3034 correlation c793f0dd-ba92-4082-a2ea-e12fa2dc509c created: 2025-03-25T01:15:32.268623Z] Mar 25 01:17:03.756470 waagent[1989]: 2025-03-25T01:17:03.755484Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
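The routing table waagent dumps from /proc/net/route a few entries back shows addresses as 32-bit hex in host byte order, which is why the WireServer appears as 10813FA8 and the 10.200.20.1 gateway as 0114C80A. A small decoder for those columns:

```python
# Decode the hex columns of /proc/net/route (as dumped by waagent above) back
# into dotted quads: 10813FA8 -> 168.63.129.16, 0114C80A -> 10.200.20.1.
import socket, struct

def hex_to_ip(h: str) -> str:
    # The kernel prints the address as a host-order 32-bit hex value
    # (little-endian on this aarch64 VM), so pack it little-endian to recover
    # network byte order before formatting.
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

with open("/proc/net/route") as f:
    next(f)                                  # skip the header line
    for line in f:
        iface, dest, gw, *_ = line.split()
        print(f"{iface:6} dest={hex_to_ip(dest):15} gw={hex_to_ip(gw)}")
```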
Mar 25 01:17:03.756470 waagent[1989]: 2025-03-25T01:17:03.756053Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 25 01:17:03.830648 waagent[1989]: 2025-03-25T01:17:03.830587Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8E4A3ADA-7E4D-4B44-AA89-AD7A3C08DE51;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 25 01:17:03.869422 waagent[1989]: 2025-03-25T01:17:03.869364Z INFO MonitorHandler ExtHandler Network interfaces: Mar 25 01:17:03.869422 waagent[1989]: Executing ['ip', '-a', '-o', 'link']: Mar 25 01:17:03.869422 waagent[1989]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 25 01:17:03.869422 waagent[1989]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:a3:74 brd ff:ff:ff:ff:ff:ff Mar 25 01:17:03.869422 waagent[1989]: 3: enP57511s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:a3:74 brd ff:ff:ff:ff:ff:ff\ altname enP57511p0s2 Mar 25 01:17:03.869422 waagent[1989]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 25 01:17:03.869422 waagent[1989]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 25 01:17:03.869422 waagent[1989]: 2: eth0 inet 10.200.20.47/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 25 01:17:03.869422 waagent[1989]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 25 01:17:03.869422 waagent[1989]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 25 01:17:03.869422 waagent[1989]: 2: eth0 inet6 fe80::20d:3aff:fe07:a374/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:17:03.869422 waagent[1989]: 3: enP57511s1 inet6 fe80::20d:3aff:fe07:a374/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:17:04.192678 waagent[1989]: 2025-03-25T01:17:04.192612Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 25 01:17:04.192678 waagent[1989]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.192678 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.192678 waagent[1989]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.192678 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.192678 waagent[1989]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.192678 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.192678 waagent[1989]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:17:04.192678 waagent[1989]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:17:04.192678 waagent[1989]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:17:04.195400 waagent[1989]: 2025-03-25T01:17:04.195343Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 25 01:17:04.195400 waagent[1989]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.195400 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.195400 waagent[1989]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.195400 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.195400 waagent[1989]: 
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:17:04.195400 waagent[1989]: pkts bytes target prot opt in out source destination Mar 25 01:17:04.195400 waagent[1989]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:17:04.195400 waagent[1989]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:17:04.195400 waagent[1989]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:17:04.195643 waagent[1989]: 2025-03-25T01:17:04.195616Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 25 01:17:07.192735 chronyd[1737]: Selected source PHC0 Mar 25 01:17:12.652151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:17:12.653589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:12.927801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:12.937686 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:12.973531 kubelet[2162]: E0325 01:17:12.973439 2162 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:12.975990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:12.976139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:12.976465 systemd[1]: kubelet.service: Consumed 124ms CPU time, 96.5M memory peak. Mar 25 01:17:19.696711 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:17:19.697900 systemd[1]: Started sshd@0-10.200.20.47:22-10.200.16.10:44240.service - OpenSSH per-connection server daemon (10.200.16.10:44240). Mar 25 01:17:20.306955 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 44240 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:20.308226 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:20.312618 systemd-logind[1741]: New session 3 of user core. Mar 25 01:17:20.322582 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:17:20.718701 systemd[1]: Started sshd@1-10.200.20.47:22-10.200.16.10:44246.service - OpenSSH per-connection server daemon (10.200.16.10:44246). Mar 25 01:17:21.171934 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 44246 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:21.173181 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:21.178526 systemd-logind[1741]: New session 4 of user core. Mar 25 01:17:21.180578 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:17:21.494426 sshd[2178]: Connection closed by 10.200.16.10 port 44246 Mar 25 01:17:21.494224 sshd-session[2176]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:21.498064 systemd[1]: sshd@1-10.200.20.47:22-10.200.16.10:44246.service: Deactivated successfully. Mar 25 01:17:21.499571 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:17:21.500192 systemd-logind[1741]: Session 4 logged out. Waiting for processes to exit. 
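The three OUTPUT rules in the waagent firewall dump a few entries back (accept DNS to the WireServer, accept root-owned traffic to it, drop anything else that is NEW or INVALID) can be reconstructed as ordinary iptables rules. A sketch under that reading; the flags the agent actually passes internally may differ, this just reproduces the listed rules idempotently and needs root to run.

```python
# Roughly the iptables invocations that correspond to the OUTPUT rules waagent
# dumps above. The exact flags the agent uses internally may differ; this is an
# equivalent reconstruction of the listed rules, applied only if missing.
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    # -C checks whether the rule already exists; append (-A) only if it does not.
    exists = subprocess.run(["iptables", "-w", "-C", "OUTPUT", *rule],
                            capture_output=True).returncode == 0
    if not exists:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)
```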
Mar 25 01:17:21.501244 systemd-logind[1741]: Removed session 4. Mar 25 01:17:21.580710 systemd[1]: Started sshd@2-10.200.20.47:22-10.200.16.10:44254.service - OpenSSH per-connection server daemon (10.200.16.10:44254). Mar 25 01:17:22.072290 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 44254 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:22.073574 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:22.078504 systemd-logind[1741]: New session 5 of user core. Mar 25 01:17:22.083581 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:17:22.426096 sshd[2186]: Connection closed by 10.200.16.10 port 44254 Mar 25 01:17:22.426646 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:22.429888 systemd[1]: sshd@2-10.200.20.47:22-10.200.16.10:44254.service: Deactivated successfully. Mar 25 01:17:22.431364 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:17:22.432050 systemd-logind[1741]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:17:22.432992 systemd-logind[1741]: Removed session 5. Mar 25 01:17:22.514798 systemd[1]: Started sshd@3-10.200.20.47:22-10.200.16.10:44270.service - OpenSSH per-connection server daemon (10.200.16.10:44270). Mar 25 01:17:23.011140 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 44270 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:23.012427 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:23.013319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 25 01:17:23.014789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:23.020083 systemd-logind[1741]: New session 6 of user core. Mar 25 01:17:23.028589 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 01:17:23.369556 sshd[2197]: Connection closed by 10.200.16.10 port 44270 Mar 25 01:17:23.370158 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:23.373672 systemd[1]: sshd@3-10.200.20.47:22-10.200.16.10:44270.service: Deactivated successfully. Mar 25 01:17:23.375206 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:17:23.376709 systemd-logind[1741]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:17:23.377710 systemd-logind[1741]: Removed session 6. Mar 25 01:17:23.422775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:23.426545 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:23.451880 systemd[1]: Started sshd@4-10.200.20.47:22-10.200.16.10:44272.service - OpenSSH per-connection server daemon (10.200.16.10:44272). Mar 25 01:17:23.467266 kubelet[2207]: E0325 01:17:23.467237 2207 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:23.470071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:23.470313 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 25 01:17:23.470778 systemd[1]: kubelet.service: Consumed 127ms CPU time, 97M memory peak. Mar 25 01:17:23.909545 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 44272 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:23.910765 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:23.914878 systemd-logind[1741]: New session 7 of user core. Mar 25 01:17:23.924649 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:17:24.312436 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:17:24.312744 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:24.338150 sudo[2219]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:24.409779 sshd[2218]: Connection closed by 10.200.16.10 port 44272 Mar 25 01:17:24.408945 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:24.412485 systemd[1]: sshd@4-10.200.20.47:22-10.200.16.10:44272.service: Deactivated successfully. Mar 25 01:17:24.413969 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:17:24.414693 systemd-logind[1741]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:17:24.415905 systemd-logind[1741]: Removed session 7. Mar 25 01:17:24.488943 systemd[1]: Started sshd@5-10.200.20.47:22-10.200.16.10:44278.service - OpenSSH per-connection server daemon (10.200.16.10:44278). Mar 25 01:17:24.948341 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 44278 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:24.949692 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:24.953580 systemd-logind[1741]: New session 8 of user core. Mar 25 01:17:24.961589 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:17:25.201365 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:17:25.201737 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:25.204695 sudo[2229]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:25.208944 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:17:25.209194 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:25.217246 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:17:25.250607 augenrules[2251]: No rules Mar 25 01:17:25.251118 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:17:25.251330 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:17:25.252647 sudo[2228]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:25.336123 sshd[2227]: Connection closed by 10.200.16.10 port 44278 Mar 25 01:17:25.336703 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:25.339585 systemd-logind[1741]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:17:25.341218 systemd[1]: sshd@5-10.200.20.47:22-10.200.16.10:44278.service: Deactivated successfully. Mar 25 01:17:25.342862 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:17:25.344898 systemd-logind[1741]: Removed session 8. 
Mar 25 01:17:25.423132 systemd[1]: Started sshd@6-10.200.20.47:22-10.200.16.10:44290.service - OpenSSH per-connection server daemon (10.200.16.10:44290). Mar 25 01:17:25.915045 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 44290 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:25.916241 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:25.920576 systemd-logind[1741]: New session 9 of user core. Mar 25 01:17:25.930592 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:17:26.184687 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:17:26.184972 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:27.297907 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:17:27.308710 (dockerd)[2281]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:17:27.353476 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 25 01:17:28.000236 dockerd[2281]: time="2025-03-25T01:17:28.000189415Z" level=info msg="Starting up" Mar 25 01:17:28.003179 dockerd[2281]: time="2025-03-25T01:17:28.003147530Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:17:28.126865 dockerd[2281]: time="2025-03-25T01:17:28.126824279Z" level=info msg="Loading containers: start." Mar 25 01:17:28.244137 update_engine[1744]: I20250325 01:17:28.243588 1744 update_attempter.cc:509] Updating boot flags... Mar 25 01:17:28.302682 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2393) Mar 25 01:17:28.337509 kernel: Initializing XFRM netlink socket Mar 25 01:17:28.493073 systemd-networkd[1487]: docker0: Link UP Mar 25 01:17:28.543611 dockerd[2281]: time="2025-03-25T01:17:28.543561366Z" level=info msg="Loading containers: done." Mar 25 01:17:28.614597 dockerd[2281]: time="2025-03-25T01:17:28.614215325Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:17:28.614597 dockerd[2281]: time="2025-03-25T01:17:28.614308685Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:17:28.614597 dockerd[2281]: time="2025-03-25T01:17:28.614433725Z" level=info msg="Daemon has completed initialization" Mar 25 01:17:28.674582 dockerd[2281]: time="2025-03-25T01:17:28.674489382Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:17:28.674944 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:17:29.957691 containerd[1763]: time="2025-03-25T01:17:29.957645508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 25 01:17:30.834544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937510773.mount: Deactivated successfully. 
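The overlay2 warning from dockerd above refers to the kernel option CONFIG_OVERLAY_FS_REDIRECT_DIR. A hedged sketch that looks that option up in the running kernel's configuration, assuming it is exposed via /proc/config.gz or /boot/config-&lt;release&gt; (neither file is guaranteed to exist on every image).

```python
# Check whether the running kernel sets CONFIG_OVERLAY_FS_REDIRECT_DIR, the
# option dockerd's overlay2 warning above refers to. Assumes the kernel config
# is exposed via /proc/config.gz or /boot/config-<release>.
import gzip, os, pathlib

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def kernel_config_lines():
    proc_cfg = pathlib.Path("/proc/config.gz")
    if proc_cfg.exists():
        return gzip.open(proc_cfg, "rt").read().splitlines()
    boot_cfg = pathlib.Path(f"/boot/config-{os.uname().release}")
    if boot_cfg.exists():
        return boot_cfg.read_text().splitlines()
    return []

matches = [l for l in kernel_config_lines() if l.startswith(OPTION + "=")]
print(matches or f"{OPTION} not found (config not exposed or option unset)")
```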
Mar 25 01:17:32.265486 containerd[1763]: time="2025-03-25T01:17:32.265211407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:32.271627 containerd[1763]: time="2025-03-25T01:17:32.271583679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793524" Mar 25 01:17:32.280845 containerd[1763]: time="2025-03-25T01:17:32.280546107Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:32.290485 containerd[1763]: time="2025-03-25T01:17:32.290281493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:32.291236 containerd[1763]: time="2025-03-25T01:17:32.290805333Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.333115585s" Mar 25 01:17:32.291236 containerd[1763]: time="2025-03-25T01:17:32.290836933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 25 01:17:32.306176 containerd[1763]: time="2025-03-25T01:17:32.306130272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 25 01:17:33.652027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 25 01:17:33.653401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:33.773725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:33.782770 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:33.843535 kubelet[2616]: E0325 01:17:33.843308 2616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:33.846181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:33.846903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:33.847327 systemd[1]: kubelet.service: Consumed 127ms CPU time, 96.6M memory peak. 
Mar 25 01:17:34.242689 containerd[1763]: time="2025-03-25T01:17:34.242639153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:34.247791 containerd[1763]: time="2025-03-25T01:17:34.247762466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861167" Mar 25 01:17:34.252648 containerd[1763]: time="2025-03-25T01:17:34.252592340Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:34.261222 containerd[1763]: time="2025-03-25T01:17:34.261172008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:34.262115 containerd[1763]: time="2025-03-25T01:17:34.262003527Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.955664136s" Mar 25 01:17:34.262115 containerd[1763]: time="2025-03-25T01:17:34.262033567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 25 01:17:34.277760 containerd[1763]: time="2025-03-25T01:17:34.277730106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 25 01:17:36.000488 containerd[1763]: time="2025-03-25T01:17:36.000176239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:36.003375 containerd[1763]: time="2025-03-25T01:17:36.003135515Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264636" Mar 25 01:17:36.008878 containerd[1763]: time="2025-03-25T01:17:36.008833267Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:36.015572 containerd[1763]: time="2025-03-25T01:17:36.015519218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:36.016986 containerd[1763]: time="2025-03-25T01:17:36.016603176Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.738781951s" Mar 25 01:17:36.016986 containerd[1763]: time="2025-03-25T01:17:36.016635216Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 25 01:17:36.031199 
containerd[1763]: time="2025-03-25T01:17:36.031138077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 25 01:17:37.258551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483500521.mount: Deactivated successfully. Mar 25 01:17:37.599292 containerd[1763]: time="2025-03-25T01:17:37.599161498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:37.604317 containerd[1763]: time="2025-03-25T01:17:37.604107409Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848" Mar 25 01:17:37.607540 containerd[1763]: time="2025-03-25T01:17:37.607511443Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:37.614013 containerd[1763]: time="2025-03-25T01:17:37.613977992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:37.614792 containerd[1763]: time="2025-03-25T01:17:37.614670951Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.583389115s" Mar 25 01:17:37.614792 containerd[1763]: time="2025-03-25T01:17:37.614702431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 25 01:17:37.634964 containerd[1763]: time="2025-03-25T01:17:37.634724676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:17:38.350253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302676100.mount: Deactivated successfully. 
Mar 25 01:17:39.422640 containerd[1763]: time="2025-03-25T01:17:39.422533575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:39.431567 containerd[1763]: time="2025-03-25T01:17:39.431488079Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 25 01:17:39.440560 containerd[1763]: time="2025-03-25T01:17:39.440510423Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:39.447712 containerd[1763]: time="2025-03-25T01:17:39.447663251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:39.448592 containerd[1763]: time="2025-03-25T01:17:39.448563289Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.813798453s" Mar 25 01:17:39.448864 containerd[1763]: time="2025-03-25T01:17:39.448672089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 25 01:17:39.463470 containerd[1763]: time="2025-03-25T01:17:39.463289663Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 25 01:17:40.622735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796816579.mount: Deactivated successfully. 
Mar 25 01:17:40.664749 containerd[1763]: time="2025-03-25T01:17:40.664702815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:40.668262 containerd[1763]: time="2025-03-25T01:17:40.668215048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Mar 25 01:17:40.681753 containerd[1763]: time="2025-03-25T01:17:40.681692624Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:40.696860 containerd[1763]: time="2025-03-25T01:17:40.696804918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:40.697944 containerd[1763]: time="2025-03-25T01:17:40.697430797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.234107934s" Mar 25 01:17:40.697944 containerd[1763]: time="2025-03-25T01:17:40.697476116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 25 01:17:40.712381 containerd[1763]: time="2025-03-25T01:17:40.712096491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 25 01:17:41.585256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130057016.mount: Deactivated successfully. Mar 25 01:17:43.902071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 25 01:17:43.905628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:44.018562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:44.025745 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:44.061625 kubelet[2772]: E0325 01:17:44.061570 2772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:44.064150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:44.064287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:44.064796 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.4M memory peak. 
Mar 25 01:17:44.475597 containerd[1763]: time="2025-03-25T01:17:44.475542263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:44.480537 containerd[1763]: time="2025-03-25T01:17:44.480469335Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Mar 25 01:17:44.486138 containerd[1763]: time="2025-03-25T01:17:44.486085005Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:44.493035 containerd[1763]: time="2025-03-25T01:17:44.492974512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:44.494025 containerd[1763]: time="2025-03-25T01:17:44.493899911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.781766421s" Mar 25 01:17:44.494025 containerd[1763]: time="2025-03-25T01:17:44.493932511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 25 01:17:49.520279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:49.520877 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.4M memory peak. Mar 25 01:17:49.522856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:49.541208 systemd[1]: Reload requested from client PID 2864 ('systemctl') (unit session-9.scope)... Mar 25 01:17:49.541342 systemd[1]: Reloading... Mar 25 01:17:49.645501 zram_generator::config[2914]: No configuration found. Mar 25 01:17:49.752773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:17:49.875181 systemd[1]: Reloading finished in 333 ms. Mar 25 01:17:49.924705 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 25 01:17:49.924795 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 25 01:17:49.925067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:49.925123 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.4M memory peak. Mar 25 01:17:49.927337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:50.034650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:50.041893 (kubelet)[2979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:17:50.080743 kubelet[2979]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:50.081052 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 25 01:17:50.081097 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:50.081214 kubelet[2979]: I0325 01:17:50.081187 2979 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:17:50.719480 kubelet[2979]: I0325 01:17:50.719001 2979 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:17:50.719480 kubelet[2979]: I0325 01:17:50.719027 2979 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:17:50.719480 kubelet[2979]: I0325 01:17:50.719241 2979 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:17:50.730888 kubelet[2979]: E0325 01:17:50.730832 2979 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.732647 kubelet[2979]: I0325 01:17:50.732527 2979 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:17:50.741305 kubelet[2979]: I0325 01:17:50.741095 2979 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:17:50.742766 kubelet[2979]: I0325 01:17:50.742714 2979 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:17:50.742952 kubelet[2979]: I0325 01:17:50.742771 2979 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-be6d65597e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:17:50.743038 
kubelet[2979]: I0325 01:17:50.742964 2979 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:17:50.743038 kubelet[2979]: I0325 01:17:50.742973 2979 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:17:50.743116 kubelet[2979]: I0325 01:17:50.743097 2979 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:50.743849 kubelet[2979]: I0325 01:17:50.743830 2979 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:17:50.743876 kubelet[2979]: I0325 01:17:50.743853 2979 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:17:50.744258 kubelet[2979]: I0325 01:17:50.744208 2979 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:17:50.744258 kubelet[2979]: I0325 01:17:50.744235 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:17:50.744980 kubelet[2979]: W0325 01:17:50.744441 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-be6d65597e&limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.744980 kubelet[2979]: E0325 01:17:50.744513 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-be6d65597e&limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.744980 kubelet[2979]: W0325 01:17:50.744923 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.744980 kubelet[2979]: E0325 01:17:50.744966 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.745124 kubelet[2979]: I0325 01:17:50.745059 2979 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:17:50.745265 kubelet[2979]: I0325 01:17:50.745242 2979 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:17:50.745296 kubelet[2979]: W0325 01:17:50.745290 2979 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 25 01:17:50.747477 kubelet[2979]: I0325 01:17:50.746002 2979 server.go:1264] "Started kubelet" Mar 25 01:17:50.747477 kubelet[2979]: I0325 01:17:50.747320 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:17:50.751153 kubelet[2979]: I0325 01:17:50.751113 2979 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:17:50.752318 kubelet[2979]: I0325 01:17:50.752302 2979 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:17:50.753017 kubelet[2979]: I0325 01:17:50.752988 2979 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:17:50.753890 kubelet[2979]: I0325 01:17:50.753823 2979 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:17:50.754049 kubelet[2979]: I0325 01:17:50.754024 2979 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:17:50.755755 kubelet[2979]: I0325 01:17:50.755737 2979 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:17:50.756857 kubelet[2979]: I0325 01:17:50.756842 2979 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:17:50.757502 kubelet[2979]: E0325 01:17:50.756938 2979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-be6d65597e.182fe6e95b1bb86c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-be6d65597e,UID:ci-4284.0.0-a-be6d65597e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-be6d65597e,},FirstTimestamp:2025-03-25 01:17:50.745983084 +0000 UTC m=+0.700649050,LastTimestamp:2025-03-25 01:17:50.745983084 +0000 UTC m=+0.700649050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-be6d65597e,}" Mar 25 01:17:50.757608 kubelet[2979]: E0325 01:17:50.757560 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-be6d65597e?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="200ms" Mar 25 01:17:50.758697 kubelet[2979]: I0325 01:17:50.758667 2979 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:17:50.758779 kubelet[2979]: I0325 01:17:50.758756 2979 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:17:50.760935 kubelet[2979]: W0325 01:17:50.760120 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.760935 kubelet[2979]: E0325 01:17:50.760185 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 
01:17:50.760935 kubelet[2979]: I0325 01:17:50.760526 2979 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:17:50.769594 kubelet[2979]: E0325 01:17:50.769416 2979 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:17:50.774587 kubelet[2979]: I0325 01:17:50.774566 2979 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:17:50.774587 kubelet[2979]: I0325 01:17:50.774583 2979 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:17:50.774684 kubelet[2979]: I0325 01:17:50.774605 2979 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:50.778071 kubelet[2979]: I0325 01:17:50.778039 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:17:50.784648 kubelet[2979]: I0325 01:17:50.778943 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:17:50.784648 kubelet[2979]: I0325 01:17:50.778970 2979 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:17:50.784648 kubelet[2979]: I0325 01:17:50.778984 2979 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:17:50.784648 kubelet[2979]: E0325 01:17:50.779021 2979 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:17:50.784648 kubelet[2979]: W0325 01:17:50.781297 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.784648 kubelet[2979]: E0325 01:17:50.781341 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:50.785929 kubelet[2979]: I0325 01:17:50.785908 2979 policy_none.go:49] "None policy: Start" Mar 25 01:17:50.786767 kubelet[2979]: I0325 01:17:50.786495 2979 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:17:50.786767 kubelet[2979]: I0325 01:17:50.786519 2979 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:17:50.798267 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:17:50.808886 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:17:50.813576 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 25 01:17:50.825486 kubelet[2979]: I0325 01:17:50.825464 2979 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:17:50.825934 kubelet[2979]: I0325 01:17:50.825817 2979 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:17:50.826016 kubelet[2979]: I0325 01:17:50.826006 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:17:50.828352 kubelet[2979]: E0325 01:17:50.828329 2979 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-a-be6d65597e\" not found" Mar 25 01:17:50.854886 kubelet[2979]: I0325 01:17:50.854830 2979 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.855187 kubelet[2979]: E0325 01:17:50.855156 2979 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.879802 kubelet[2979]: I0325 01:17:50.879528 2979 topology_manager.go:215] "Topology Admit Handler" podUID="eaa3275617f36b79a25afb0765e9a484" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.881267 kubelet[2979]: I0325 01:17:50.881246 2979 topology_manager.go:215] "Topology Admit Handler" podUID="83336740cceec92172d547b0ccf85257" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.882912 kubelet[2979]: I0325 01:17:50.882896 2979 topology_manager.go:215] "Topology Admit Handler" podUID="d78cc89c7011463362132633b985ba1f" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.891077 systemd[1]: Created slice kubepods-burstable-podeaa3275617f36b79a25afb0765e9a484.slice - libcontainer container kubepods-burstable-podeaa3275617f36b79a25afb0765e9a484.slice. Mar 25 01:17:50.908765 systemd[1]: Created slice kubepods-burstable-pod83336740cceec92172d547b0ccf85257.slice - libcontainer container kubepods-burstable-pod83336740cceec92172d547b0ccf85257.slice. Mar 25 01:17:50.922888 systemd[1]: Created slice kubepods-burstable-podd78cc89c7011463362132633b985ba1f.slice - libcontainer container kubepods-burstable-podd78cc89c7011463362132633b985ba1f.slice. 
Mar 25 01:17:50.958016 kubelet[2979]: I0325 01:17:50.957773 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958016 kubelet[2979]: I0325 01:17:50.957809 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d78cc89c7011463362132633b985ba1f-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-be6d65597e\" (UID: \"d78cc89c7011463362132633b985ba1f\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958016 kubelet[2979]: I0325 01:17:50.957827 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958016 kubelet[2979]: I0325 01:17:50.957842 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958016 kubelet[2979]: I0325 01:17:50.957860 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958229 kubelet[2979]: I0325 01:17:50.957875 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958229 kubelet[2979]: I0325 01:17:50.957892 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958229 kubelet[2979]: I0325 01:17:50.957919 2979 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958229 kubelet[2979]: I0325 01:17:50.957939 2979 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:50.958229 kubelet[2979]: E0325 01:17:50.957980 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-be6d65597e?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="400ms" Mar 25 01:17:51.058531 kubelet[2979]: I0325 01:17:51.057904 2979 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:51.058725 kubelet[2979]: E0325 01:17:51.058693 2979 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:51.207315 containerd[1763]: time="2025-03-25T01:17:51.207012914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-be6d65597e,Uid:eaa3275617f36b79a25afb0765e9a484,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:51.221274 containerd[1763]: time="2025-03-25T01:17:51.221232409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-be6d65597e,Uid:83336740cceec92172d547b0ccf85257,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:51.226039 containerd[1763]: time="2025-03-25T01:17:51.226012240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-be6d65597e,Uid:d78cc89c7011463362132633b985ba1f,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:51.359348 kubelet[2979]: E0325 01:17:51.359223 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-be6d65597e?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="800ms" Mar 25 01:17:51.461197 kubelet[2979]: I0325 01:17:51.461169 2979 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:51.461518 kubelet[2979]: E0325 01:17:51.461487 2979 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:51.856513 kubelet[2979]: W0325 01:17:51.856398 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:51.856513 kubelet[2979]: E0325 01:17:51.856485 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:51.926077 kubelet[2979]: W0325 01:17:51.925993 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-be6d65597e&limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:51.926077 kubelet[2979]: E0325 01:17:51.926055 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-be6d65597e&limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.001777 kubelet[2979]: W0325 01:17:52.001720 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.001777 kubelet[2979]: E0325 01:17:52.001785 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.062369 kubelet[2979]: W0325 01:17:52.062307 2979 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.062369 kubelet[2979]: E0325 01:17:52.062373 2979 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.160119 kubelet[2979]: E0325 01:17:52.160068 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-be6d65597e?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="1.6s" Mar 25 01:17:52.263048 kubelet[2979]: I0325 01:17:52.263001 2979 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:52.263419 kubelet[2979]: E0325 01:17:52.263353 2979 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:52.519819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673143340.mount: Deactivated successfully. 
Mar 25 01:17:52.555277 containerd[1763]: time="2025-03-25T01:17:52.555227064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:52.564009 containerd[1763]: time="2025-03-25T01:17:52.563948489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 25 01:17:52.583486 containerd[1763]: time="2025-03-25T01:17:52.583296215Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:52.588154 containerd[1763]: time="2025-03-25T01:17:52.588110246Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:52.592082 containerd[1763]: time="2025-03-25T01:17:52.592033839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:17:52.609534 containerd[1763]: time="2025-03-25T01:17:52.609498329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:52.610227 containerd[1763]: time="2025-03-25T01:17:52.610193927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 749.273003ms" Mar 25 01:17:52.618001 containerd[1763]: time="2025-03-25T01:17:52.617404115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:52.636692 containerd[1763]: time="2025-03-25T01:17:52.636632361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:17:52.637048 containerd[1763]: time="2025-03-25T01:17:52.637020480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 745.666769ms" Mar 25 01:17:52.673268 containerd[1763]: time="2025-03-25T01:17:52.673221337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 775.574157ms" Mar 25 01:17:52.676093 containerd[1763]: time="2025-03-25T01:17:52.676055292Z" level=info msg="connecting to shim c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90" address="unix:///run/containerd/s/defdcb5d3f72b932fb46c771290d54bfad371dd624192bb5ef7882c2a7101e7c" namespace=k8s.io protocol=ttrpc version=3 Mar 
25 01:17:52.699602 systemd[1]: Started cri-containerd-c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90.scope - libcontainer container c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90. Mar 25 01:17:52.756699 containerd[1763]: time="2025-03-25T01:17:52.756610910Z" level=info msg="connecting to shim 8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d" address="unix:///run/containerd/s/06c558221b6afd5d4c64f34afc42c2326e5f1b6b7a2ef1b7f43f1e069bfbf38a" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:52.759076 containerd[1763]: time="2025-03-25T01:17:52.758931146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-be6d65597e,Uid:eaa3275617f36b79a25afb0765e9a484,Namespace:kube-system,Attempt:0,} returns sandbox id \"c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90\"" Mar 25 01:17:52.763523 containerd[1763]: time="2025-03-25T01:17:52.763493138Z" level=info msg="CreateContainer within sandbox \"c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:17:52.775370 containerd[1763]: time="2025-03-25T01:17:52.774024399Z" level=info msg="connecting to shim 77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e" address="unix:///run/containerd/s/bf4008d30dddcac410d6b4e5ae3ac9f8f3842561123fd535f0d2436cbf7f562c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:52.777658 systemd[1]: Started cri-containerd-8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d.scope - libcontainer container 8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d. Mar 25 01:17:52.789629 kubelet[2979]: E0325 01:17:52.789596 2979 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.47:6443: connect: connection refused Mar 25 01:17:52.801631 systemd[1]: Started cri-containerd-77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e.scope - libcontainer container 77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e. 
Mar 25 01:17:52.805203 containerd[1763]: time="2025-03-25T01:17:52.805161345Z" level=info msg="Container 2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:52.833145 containerd[1763]: time="2025-03-25T01:17:52.833001096Z" level=info msg="CreateContainer within sandbox \"c95d29fcd02e018a61439c9558466d258ef9fab8f5835374142b825b2c1f7d90\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57\"" Mar 25 01:17:52.833766 containerd[1763]: time="2025-03-25T01:17:52.833727615Z" level=info msg="StartContainer for \"2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57\"" Mar 25 01:17:52.837180 containerd[1763]: time="2025-03-25T01:17:52.836438090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-be6d65597e,Uid:83336740cceec92172d547b0ccf85257,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d\"" Mar 25 01:17:52.838628 containerd[1763]: time="2025-03-25T01:17:52.838164727Z" level=info msg="connecting to shim 2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57" address="unix:///run/containerd/s/defdcb5d3f72b932fb46c771290d54bfad371dd624192bb5ef7882c2a7101e7c" protocol=ttrpc version=3 Mar 25 01:17:52.838828 containerd[1763]: time="2025-03-25T01:17:52.838785286Z" level=info msg="CreateContainer within sandbox \"8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:17:52.856508 containerd[1763]: time="2025-03-25T01:17:52.856463575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-be6d65597e,Uid:d78cc89c7011463362132633b985ba1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e\"" Mar 25 01:17:52.858740 systemd[1]: Started cri-containerd-2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57.scope - libcontainer container 2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57. 
Mar 25 01:17:52.859230 containerd[1763]: time="2025-03-25T01:17:52.858962330Z" level=info msg="CreateContainer within sandbox \"77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:17:52.884476 containerd[1763]: time="2025-03-25T01:17:52.884076446Z" level=info msg="Container 229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:52.897191 containerd[1763]: time="2025-03-25T01:17:52.897154903Z" level=info msg="Container b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:52.902467 containerd[1763]: time="2025-03-25T01:17:52.902401134Z" level=info msg="StartContainer for \"2be3bef800d3ae0c7e2e024fa65f79d29c3ed11d23d745c63d850c991f9b5c57\" returns successfully" Mar 25 01:17:52.924317 containerd[1763]: time="2025-03-25T01:17:52.924261055Z" level=info msg="CreateContainer within sandbox \"8c80ecc2db48fef7de9bca96b219fed45f564126689cfa072f59328206d49a4d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1\"" Mar 25 01:17:52.924852 containerd[1763]: time="2025-03-25T01:17:52.924824574Z" level=info msg="StartContainer for \"229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1\"" Mar 25 01:17:52.927187 containerd[1763]: time="2025-03-25T01:17:52.925812533Z" level=info msg="connecting to shim 229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1" address="unix:///run/containerd/s/06c558221b6afd5d4c64f34afc42c2326e5f1b6b7a2ef1b7f43f1e069bfbf38a" protocol=ttrpc version=3 Mar 25 01:17:52.947456 containerd[1763]: time="2025-03-25T01:17:52.946129097Z" level=info msg="CreateContainer within sandbox \"77b744f41c3cf6341701a2a2189dd39591858b7085f339e7bf3f275077e7cf0e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb\"" Mar 25 01:17:52.946594 systemd[1]: Started cri-containerd-229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1.scope - libcontainer container 229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1. Mar 25 01:17:52.948160 containerd[1763]: time="2025-03-25T01:17:52.948125853Z" level=info msg="StartContainer for \"b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb\"" Mar 25 01:17:52.949450 containerd[1763]: time="2025-03-25T01:17:52.949120252Z" level=info msg="connecting to shim b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb" address="unix:///run/containerd/s/bf4008d30dddcac410d6b4e5ae3ac9f8f3842561123fd535f0d2436cbf7f562c" protocol=ttrpc version=3 Mar 25 01:17:52.972584 systemd[1]: Started cri-containerd-b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb.scope - libcontainer container b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb. 
Mar 25 01:17:53.027031 containerd[1763]: time="2025-03-25T01:17:53.026860755Z" level=info msg="StartContainer for \"229b6381443a02cc1acdcdbda013ee2a1ad724683c6ab2babef81b509d7a8bd1\" returns successfully" Mar 25 01:17:53.040469 containerd[1763]: time="2025-03-25T01:17:53.039415293Z" level=info msg="StartContainer for \"b3fc37584febb053396d4535ca19dc359ed9372490bc81822e83fd860fd0e3bb\" returns successfully" Mar 25 01:17:53.865988 kubelet[2979]: I0325 01:17:53.865955 2979 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:55.238266 kubelet[2979]: E0325 01:17:55.238217 2979 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-a-be6d65597e\" not found" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:55.322330 kubelet[2979]: E0325 01:17:55.322223 2979 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284.0.0-a-be6d65597e.182fe6e95b1bb86c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-be6d65597e,UID:ci-4284.0.0-a-be6d65597e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-be6d65597e,},FirstTimestamp:2025-03-25 01:17:50.745983084 +0000 UTC m=+0.700649050,LastTimestamp:2025-03-25 01:17:50.745983084 +0000 UTC m=+0.700649050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-be6d65597e,}" Mar 25 01:17:55.380214 kubelet[2979]: I0325 01:17:55.380174 2979 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:55.388522 kubelet[2979]: E0325 01:17:55.387795 2979 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284.0.0-a-be6d65597e.182fe6e95c811b83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-be6d65597e,UID:ci-4284.0.0-a-be6d65597e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-be6d65597e,},FirstTimestamp:2025-03-25 01:17:50.769404803 +0000 UTC m=+0.724070769,LastTimestamp:2025-03-25 01:17:50.769404803 +0000 UTC m=+0.724070769,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-be6d65597e,}" Mar 25 01:17:55.476516 kubelet[2979]: E0325 01:17:55.476393 2979 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284.0.0-a-be6d65597e.182fe6e95cc6d963 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-be6d65597e,UID:ci-4284.0.0-a-be6d65597e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4284.0.0-a-be6d65597e status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-be6d65597e,},FirstTimestamp:2025-03-25 01:17:50.773975395 +0000 UTC m=+0.728641361,LastTimestamp:2025-03-25 01:17:50.773975395 +0000 UTC m=+0.728641361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-be6d65597e,}" Mar 25 01:17:55.624155 kubelet[2979]: E0325 01:17:55.623962 2979 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284.0.0-a-be6d65597e.182fe6e95cc70843 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-be6d65597e,UID:ci-4284.0.0-a-be6d65597e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4284.0.0-a-be6d65597e status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-be6d65597e,},FirstTimestamp:2025-03-25 01:17:50.773987395 +0000 UTC m=+0.728653361,LastTimestamp:2025-03-25 01:17:50.773987395 +0000 UTC m=+0.728653361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-be6d65597e,}" Mar 25 01:17:55.749040 kubelet[2979]: I0325 01:17:55.748815 2979 apiserver.go:52] "Watching apiserver" Mar 25 01:17:55.756366 kubelet[2979]: I0325 01:17:55.756332 2979 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:17:55.831793 kubelet[2979]: E0325 01:17:55.831323 2979 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:57.438308 kubelet[2979]: W0325 01:17:57.438085 2979 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:57.483628 systemd[1]: Reload requested from client PID 3251 ('systemctl') (unit session-9.scope)... Mar 25 01:17:57.483642 systemd[1]: Reloading... Mar 25 01:17:57.566787 kubelet[2979]: W0325 01:17:57.566228 2979 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:57.602619 zram_generator::config[3299]: No configuration found. Mar 25 01:17:57.720849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:17:57.859002 systemd[1]: Reloading finished in 375 ms. Mar 25 01:17:57.884720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:57.896366 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:17:57.896624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:57.896682 systemd[1]: kubelet.service: Consumed 1.046s CPU time, 113.4M memory peak. Mar 25 01:17:57.898627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:58.228762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:58.239818 (kubelet)[3362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:17:58.284156 kubelet[3362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:58.284156 kubelet[3362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:17:58.284156 kubelet[3362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:58.285844 kubelet[3362]: I0325 01:17:58.284542 3362 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:17:58.288429 kubelet[3362]: I0325 01:17:58.288323 3362 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:17:58.288429 kubelet[3362]: I0325 01:17:58.288350 3362 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:17:58.288723 kubelet[3362]: I0325 01:17:58.288697 3362 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:17:58.290085 kubelet[3362]: I0325 01:17:58.290070 3362 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:17:58.291475 kubelet[3362]: I0325 01:17:58.291373 3362 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:17:58.297814 kubelet[3362]: I0325 01:17:58.297781 3362 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:17:58.298373 kubelet[3362]: I0325 01:17:58.298149 3362 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:17:58.298373 kubelet[3362]: I0325 01:17:58.298177 3362 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-be6d65597e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:17:58.298373 
kubelet[3362]: I0325 01:17:58.298323 3362 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:17:58.298373 kubelet[3362]: I0325 01:17:58.298332 3362 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:17:58.298588 kubelet[3362]: I0325 01:17:58.298362 3362 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:58.298588 kubelet[3362]: I0325 01:17:58.298478 3362 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:17:58.298588 kubelet[3362]: I0325 01:17:58.298490 3362 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:17:58.298588 kubelet[3362]: I0325 01:17:58.298516 3362 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:17:58.298588 kubelet[3362]: I0325 01:17:58.298530 3362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:17:58.302587 kubelet[3362]: I0325 01:17:58.302568 3362 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:17:58.302814 kubelet[3362]: I0325 01:17:58.302801 3362 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:17:58.303347 kubelet[3362]: I0325 01:17:58.303332 3362 server.go:1264] "Started kubelet" Mar 25 01:17:58.303582 kubelet[3362]: I0325 01:17:58.303555 3362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:17:58.303826 kubelet[3362]: I0325 01:17:58.303779 3362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:17:58.304113 kubelet[3362]: I0325 01:17:58.304087 3362 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:17:58.304292 kubelet[3362]: I0325 01:17:58.304268 3362 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:17:58.309533 kubelet[3362]: I0325 01:17:58.309109 3362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:17:58.323487 kubelet[3362]: I0325 01:17:58.321936 3362 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:17:58.323487 kubelet[3362]: I0325 01:17:58.322313 3362 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:17:58.323487 kubelet[3362]: I0325 01:17:58.322477 3362 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:17:58.333661 kubelet[3362]: I0325 01:17:58.333551 3362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:17:58.334486 kubelet[3362]: I0325 01:17:58.334469 3362 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:17:58.334839 kubelet[3362]: I0325 01:17:58.334578 3362 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:17:58.334839 kubelet[3362]: I0325 01:17:58.334598 3362 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:17:58.334839 kubelet[3362]: E0325 01:17:58.334637 3362 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:17:58.336359 kubelet[3362]: I0325 01:17:58.336246 3362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:17:58.340489 kubelet[3362]: I0325 01:17:58.339889 3362 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:17:58.340489 kubelet[3362]: I0325 01:17:58.339911 3362 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:17:58.400553 kubelet[3362]: I0325 01:17:58.400520 3362 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:17:58.400553 kubelet[3362]: I0325 01:17:58.400542 3362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:17:58.400553 kubelet[3362]: I0325 01:17:58.400563 3362 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:58.400727 kubelet[3362]: I0325 01:17:58.400707 3362 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:17:58.400750 kubelet[3362]: I0325 01:17:58.400717 3362 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:17:58.400750 kubelet[3362]: I0325 01:17:58.400735 3362 policy_none.go:49] "None policy: Start" Mar 25 01:17:58.401527 kubelet[3362]: I0325 01:17:58.401511 3362 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:17:58.401903 kubelet[3362]: I0325 01:17:58.401645 3362 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:17:58.401903 kubelet[3362]: I0325 01:17:58.401827 3362 state_mem.go:75] "Updated machine memory state" Mar 25 01:17:58.405900 kubelet[3362]: I0325 01:17:58.405881 3362 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:17:58.406794 kubelet[3362]: I0325 01:17:58.406380 3362 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:17:58.406794 kubelet[3362]: I0325 01:17:58.406495 3362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:17:58.426566 kubelet[3362]: I0325 01:17:58.426538 3362 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.435912 kubelet[3362]: I0325 01:17:58.435533 3362 topology_manager.go:215] "Topology Admit Handler" podUID="eaa3275617f36b79a25afb0765e9a484" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.435912 kubelet[3362]: I0325 01:17:58.435654 3362 topology_manager.go:215] "Topology Admit Handler" podUID="83336740cceec92172d547b0ccf85257" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.435912 kubelet[3362]: I0325 01:17:58.435690 3362 topology_manager.go:215] "Topology Admit Handler" podUID="d78cc89c7011463362132633b985ba1f" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.521858 kubelet[3362]: W0325 01:17:58.521257 3362 warnings.go:70] metadata.name: this 
is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:58.522316 kubelet[3362]: W0325 01:17:58.522215 3362 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:58.522316 kubelet[3362]: E0325 01:17:58.522268 3362 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.524057 kubelet[3362]: W0325 01:17:58.524034 3362 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:58.524149 kubelet[3362]: E0325 01:17:58.524078 3362 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284.0.0-a-be6d65597e\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.527985 kubelet[3362]: I0325 01:17:58.527946 3362 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.528157 kubelet[3362]: I0325 01:17:58.528109 3362 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.576934 sudo[3392]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 25 01:17:58.577193 sudo[3392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 25 01:17:58.623816 kubelet[3362]: I0325 01:17:58.623537 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.623816 kubelet[3362]: I0325 01:17:58.623578 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.623816 kubelet[3362]: I0325 01:17:58.623599 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.623816 kubelet[3362]: I0325 01:17:58.623616 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.623816 kubelet[3362]: I0325 01:17:58.623643 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/eaa3275617f36b79a25afb0765e9a484-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" (UID: \"eaa3275617f36b79a25afb0765e9a484\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.624041 kubelet[3362]: I0325 01:17:58.623664 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.624041 kubelet[3362]: I0325 01:17:58.623679 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.624041 kubelet[3362]: I0325 01:17:58.623695 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83336740cceec92172d547b0ccf85257-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-be6d65597e\" (UID: \"83336740cceec92172d547b0ccf85257\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:58.624041 kubelet[3362]: I0325 01:17:58.623712 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d78cc89c7011463362132633b985ba1f-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-be6d65597e\" (UID: \"d78cc89c7011463362132633b985ba1f\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:59.014296 sudo[3392]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:59.301389 kubelet[3362]: I0325 01:17:59.300237 3362 apiserver.go:52] "Watching apiserver" Mar 25 01:17:59.323369 kubelet[3362]: I0325 01:17:59.323314 3362 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:17:59.401738 kubelet[3362]: W0325 01:17:59.401672 3362 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:59.403469 kubelet[3362]: E0325 01:17:59.402273 3362 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284.0.0-a-be6d65597e\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:59.403469 kubelet[3362]: W0325 01:17:59.402494 3362 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:17:59.403469 kubelet[3362]: E0325 01:17:59.402527 3362 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-a-be6d65597e\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" Mar 25 01:17:59.426652 kubelet[3362]: I0325 01:17:59.426585 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-a-be6d65597e" podStartSLOduration=2.426555694 podStartE2EDuration="2.426555694s" podCreationTimestamp="2025-03-25 
01:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:17:59.425579255 +0000 UTC m=+1.182800742" watchObservedRunningTime="2025-03-25 01:17:59.426555694 +0000 UTC m=+1.183777221" Mar 25 01:17:59.474587 kubelet[3362]: I0325 01:17:59.474546 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-a-be6d65597e" podStartSLOduration=1.47451176 podStartE2EDuration="1.47451176s" podCreationTimestamp="2025-03-25 01:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:17:59.43891479 +0000 UTC m=+1.196136317" watchObservedRunningTime="2025-03-25 01:17:59.47451176 +0000 UTC m=+1.231733287" Mar 25 01:17:59.474908 kubelet[3362]: I0325 01:17:59.474837 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-be6d65597e" podStartSLOduration=2.4748156 podStartE2EDuration="2.4748156s" podCreationTimestamp="2025-03-25 01:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:17:59.474397641 +0000 UTC m=+1.231619168" watchObservedRunningTime="2025-03-25 01:17:59.4748156 +0000 UTC m=+1.232037127" Mar 25 01:18:00.686409 sudo[2263]: pam_unix(sudo:session): session closed for user root Mar 25 01:18:00.774352 sshd[2262]: Connection closed by 10.200.16.10 port 44290 Mar 25 01:18:00.774892 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:00.778699 systemd[1]: sshd@6-10.200.20.47:22-10.200.16.10:44290.service: Deactivated successfully. Mar 25 01:18:00.781719 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:18:00.782268 systemd[1]: session-9.scope: Consumed 6.678s CPU time, 284.5M memory peak. Mar 25 01:18:00.783802 systemd-logind[1741]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:18:00.785958 systemd-logind[1741]: Removed session 9. Mar 25 01:18:13.004948 kubelet[3362]: I0325 01:18:13.004720 3362 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:18:13.005270 containerd[1763]: time="2025-03-25T01:18:13.004999652Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 01:18:13.005594 kubelet[3362]: I0325 01:18:13.005573 3362 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:18:14.261368 kubelet[3362]: I0325 01:18:14.259985 3362 topology_manager.go:215] "Topology Admit Handler" podUID="ca84a7ce-c903-49df-b7c1-442169663769" podNamespace="kube-system" podName="kube-proxy-bdh9q" Mar 25 01:18:14.263099 kubelet[3362]: I0325 01:18:14.263067 3362 topology_manager.go:215] "Topology Admit Handler" podUID="47896f08-5b5f-4441-b058-69942e254e71" podNamespace="kube-system" podName="cilium-5qstb" Mar 25 01:18:14.271534 systemd[1]: Created slice kubepods-besteffort-podca84a7ce_c903_49df_b7c1_442169663769.slice - libcontainer container kubepods-besteffort-podca84a7ce_c903_49df_b7c1_442169663769.slice. Mar 25 01:18:14.283152 systemd[1]: Created slice kubepods-burstable-pod47896f08_5b5f_4441_b058_69942e254e71.slice - libcontainer container kubepods-burstable-pod47896f08_5b5f_4441_b058_69942e254e71.slice. 
Mar 25 01:18:14.305669 kubelet[3362]: I0325 01:18:14.305634 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-lib-modules\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305839 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47896f08-5b5f-4441-b058-69942e254e71-clustermesh-secrets\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305865 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca84a7ce-c903-49df-b7c1-442169663769-lib-modules\") pod \"kube-proxy-bdh9q\" (UID: \"ca84a7ce-c903-49df-b7c1-442169663769\") " pod="kube-system/kube-proxy-bdh9q" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305884 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-cgroup\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305929 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4nn\" (UniqueName: \"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-kube-api-access-4h4nn\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305950 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-hostproc\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306203 kubelet[3362]: I0325 01:18:14.305967 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cni-path\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.305984 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca84a7ce-c903-49df-b7c1-442169663769-xtables-lock\") pod \"kube-proxy-bdh9q\" (UID: \"ca84a7ce-c903-49df-b7c1-442169663769\") " pod="kube-system/kube-proxy-bdh9q" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.305999 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-xtables-lock\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.306015 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-hubble-tls\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.306031 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca84a7ce-c903-49df-b7c1-442169663769-kube-proxy\") pod \"kube-proxy-bdh9q\" (UID: \"ca84a7ce-c903-49df-b7c1-442169663769\") " pod="kube-system/kube-proxy-bdh9q" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.306045 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-run\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306353 kubelet[3362]: I0325 01:18:14.306064 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-bpf-maps\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306492 kubelet[3362]: I0325 01:18:14.306082 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-net\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306492 kubelet[3362]: I0325 01:18:14.306102 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-kernel\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306492 kubelet[3362]: I0325 01:18:14.306117 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-etc-cni-netd\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306492 kubelet[3362]: I0325 01:18:14.306131 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47896f08-5b5f-4441-b058-69942e254e71-cilium-config-path\") pod \"cilium-5qstb\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " pod="kube-system/cilium-5qstb" Mar 25 01:18:14.306492 kubelet[3362]: I0325 01:18:14.306146 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrp7\" (UniqueName: \"kubernetes.io/projected/ca84a7ce-c903-49df-b7c1-442169663769-kube-api-access-6rrp7\") pod \"kube-proxy-bdh9q\" (UID: \"ca84a7ce-c903-49df-b7c1-442169663769\") " pod="kube-system/kube-proxy-bdh9q" Mar 25 01:18:14.315340 kubelet[3362]: I0325 01:18:14.314798 3362 topology_manager.go:215] "Topology Admit Handler" podUID="d5892ad3-b002-4ad4-b05f-81f696953085" podNamespace="kube-system" podName="cilium-operator-599987898-pm5jz" Mar 25 01:18:14.324551 systemd[1]: Created slice kubepods-besteffort-podd5892ad3_b002_4ad4_b05f_81f696953085.slice - 
libcontainer container kubepods-besteffort-podd5892ad3_b002_4ad4_b05f_81f696953085.slice. Mar 25 01:18:14.407100 kubelet[3362]: I0325 01:18:14.406977 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5892ad3-b002-4ad4-b05f-81f696953085-cilium-config-path\") pod \"cilium-operator-599987898-pm5jz\" (UID: \"d5892ad3-b002-4ad4-b05f-81f696953085\") " pod="kube-system/cilium-operator-599987898-pm5jz" Mar 25 01:18:14.407570 kubelet[3362]: I0325 01:18:14.407193 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmr2c\" (UniqueName: \"kubernetes.io/projected/d5892ad3-b002-4ad4-b05f-81f696953085-kube-api-access-dmr2c\") pod \"cilium-operator-599987898-pm5jz\" (UID: \"d5892ad3-b002-4ad4-b05f-81f696953085\") " pod="kube-system/cilium-operator-599987898-pm5jz" Mar 25 01:18:14.580434 containerd[1763]: time="2025-03-25T01:18:14.580338653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdh9q,Uid:ca84a7ce-c903-49df-b7c1-442169663769,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:14.586954 containerd[1763]: time="2025-03-25T01:18:14.586922361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qstb,Uid:47896f08-5b5f-4441-b058-69942e254e71,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:14.630433 containerd[1763]: time="2025-03-25T01:18:14.630373483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pm5jz,Uid:d5892ad3-b002-4ad4-b05f-81f696953085,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:14.705822 containerd[1763]: time="2025-03-25T01:18:14.705744507Z" level=info msg="connecting to shim 80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:14.718546 containerd[1763]: time="2025-03-25T01:18:14.718395084Z" level=info msg="connecting to shim 3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377" address="unix:///run/containerd/s/0c63e05522ce59aedd66da2f111a413d26dcf23b897a10b7b469d3dd77e0569f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:14.725602 systemd[1]: Started cri-containerd-80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c.scope - libcontainer container 80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c. Mar 25 01:18:14.743184 containerd[1763]: time="2025-03-25T01:18:14.743148000Z" level=info msg="connecting to shim 70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e" address="unix:///run/containerd/s/a2a38eee0c640eb430d2c75f6d056eca4438f526b7f1e7a41ac6bcb7946d90d8" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:14.747701 systemd[1]: Started cri-containerd-3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377.scope - libcontainer container 3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377. 
Mar 25 01:18:14.776480 containerd[1763]: time="2025-03-25T01:18:14.776352460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qstb,Uid:47896f08-5b5f-4441-b058-69942e254e71,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\"" Mar 25 01:18:14.779794 containerd[1763]: time="2025-03-25T01:18:14.779650374Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 01:18:14.780623 systemd[1]: Started cri-containerd-70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e.scope - libcontainer container 70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e. Mar 25 01:18:14.806383 containerd[1763]: time="2025-03-25T01:18:14.806194486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdh9q,Uid:ca84a7ce-c903-49df-b7c1-442169663769,Namespace:kube-system,Attempt:0,} returns sandbox id \"3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377\"" Mar 25 01:18:14.812866 containerd[1763]: time="2025-03-25T01:18:14.812816434Z" level=info msg="CreateContainer within sandbox \"3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:18:14.843088 containerd[1763]: time="2025-03-25T01:18:14.842934220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pm5jz,Uid:d5892ad3-b002-4ad4-b05f-81f696953085,Namespace:kube-system,Attempt:0,} returns sandbox id \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\"" Mar 25 01:18:14.852024 containerd[1763]: time="2025-03-25T01:18:14.851987724Z" level=info msg="Container d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:14.873034 containerd[1763]: time="2025-03-25T01:18:14.872629527Z" level=info msg="CreateContainer within sandbox \"3596d534d750897c3dd4b1f8e83c1b53f371724c23e40d8e0280997eb7393377\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862\"" Mar 25 01:18:14.874048 containerd[1763]: time="2025-03-25T01:18:14.873550365Z" level=info msg="StartContainer for \"d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862\"" Mar 25 01:18:14.875475 containerd[1763]: time="2025-03-25T01:18:14.875432842Z" level=info msg="connecting to shim d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862" address="unix:///run/containerd/s/0c63e05522ce59aedd66da2f111a413d26dcf23b897a10b7b469d3dd77e0569f" protocol=ttrpc version=3 Mar 25 01:18:14.892695 systemd[1]: Started cri-containerd-d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862.scope - libcontainer container d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862. 
Mar 25 01:18:14.930509 containerd[1763]: time="2025-03-25T01:18:14.930434222Z" level=info msg="StartContainer for \"d6e0a7fdc6fb293d842baee7a7e1d81712c183156bb967a43e4a658f5156e862\" returns successfully" Mar 25 01:18:18.349408 kubelet[3362]: I0325 01:18:18.348822 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdh9q" podStartSLOduration=4.348806023 podStartE2EDuration="4.348806023s" podCreationTimestamp="2025-03-25 01:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:15.435972032 +0000 UTC m=+17.193193519" watchObservedRunningTime="2025-03-25 01:18:18.348806023 +0000 UTC m=+20.106027550" Mar 25 01:18:21.661467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091150633.mount: Deactivated successfully. Mar 25 01:18:23.397598 containerd[1763]: time="2025-03-25T01:18:23.397544652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:23.401289 containerd[1763]: time="2025-03-25T01:18:23.401105046Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 25 01:18:23.408489 containerd[1763]: time="2025-03-25T01:18:23.407760714Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:23.411325 containerd[1763]: time="2025-03-25T01:18:23.411250028Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.631559254s" Mar 25 01:18:23.411325 containerd[1763]: time="2025-03-25T01:18:23.411290268Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 25 01:18:23.412931 containerd[1763]: time="2025-03-25T01:18:23.412596945Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 25 01:18:23.414324 containerd[1763]: time="2025-03-25T01:18:23.414230623Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:18:23.447836 containerd[1763]: time="2025-03-25T01:18:23.446842884Z" level=info msg="Container 4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:23.472218 containerd[1763]: time="2025-03-25T01:18:23.472183759Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\"" Mar 25 01:18:23.472825 containerd[1763]: 
time="2025-03-25T01:18:23.472769758Z" level=info msg="StartContainer for \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\"" Mar 25 01:18:23.473781 containerd[1763]: time="2025-03-25T01:18:23.473749597Z" level=info msg="connecting to shim 4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" protocol=ttrpc version=3 Mar 25 01:18:23.495605 systemd[1]: Started cri-containerd-4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1.scope - libcontainer container 4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1. Mar 25 01:18:23.526423 containerd[1763]: time="2025-03-25T01:18:23.525506184Z" level=info msg="StartContainer for \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" returns successfully" Mar 25 01:18:23.531274 systemd[1]: cri-containerd-4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1.scope: Deactivated successfully. Mar 25 01:18:23.535589 containerd[1763]: time="2025-03-25T01:18:23.535543447Z" level=info msg="received exit event container_id:\"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" id:\"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" pid:3763 exited_at:{seconds:1742865503 nanos:534225569}" Mar 25 01:18:23.536740 containerd[1763]: time="2025-03-25T01:18:23.536675084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" id:\"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" pid:3763 exited_at:{seconds:1742865503 nanos:534225569}" Mar 25 01:18:23.555393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1-rootfs.mount: Deactivated successfully. Mar 25 01:18:25.453061 containerd[1763]: time="2025-03-25T01:18:25.453015752Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:18:25.508370 containerd[1763]: time="2025-03-25T01:18:25.506135697Z" level=info msg="Container f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:25.508204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233080942.mount: Deactivated successfully. Mar 25 01:18:25.530791 containerd[1763]: time="2025-03-25T01:18:25.530751013Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\"" Mar 25 01:18:25.531313 containerd[1763]: time="2025-03-25T01:18:25.531281252Z" level=info msg="StartContainer for \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\"" Mar 25 01:18:25.535127 containerd[1763]: time="2025-03-25T01:18:25.533885248Z" level=info msg="connecting to shim f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" protocol=ttrpc version=3 Mar 25 01:18:25.555600 systemd[1]: Started cri-containerd-f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4.scope - libcontainer container f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4. 
Mar 25 01:18:25.590011 containerd[1763]: time="2025-03-25T01:18:25.589946268Z" level=info msg="StartContainer for \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" returns successfully" Mar 25 01:18:25.594824 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:18:25.595424 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:18:25.596056 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:18:25.597917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:18:25.599848 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:18:25.600438 containerd[1763]: time="2025-03-25T01:18:25.600257249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" id:\"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" pid:3809 exited_at:{seconds:1742865505 nanos:598797132}" Mar 25 01:18:25.600438 containerd[1763]: time="2025-03-25T01:18:25.600347049Z" level=info msg="received exit event container_id:\"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" id:\"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" pid:3809 exited_at:{seconds:1742865505 nanos:598797132}" Mar 25 01:18:25.601400 systemd[1]: cri-containerd-f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4.scope: Deactivated successfully. Mar 25 01:18:25.620239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:18:26.459512 containerd[1763]: time="2025-03-25T01:18:26.459461679Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:18:26.492464 containerd[1763]: time="2025-03-25T01:18:26.490166104Z" level=info msg="Container 708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:26.507294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4-rootfs.mount: Deactivated successfully. Mar 25 01:18:26.518407 containerd[1763]: time="2025-03-25T01:18:26.518353814Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\"" Mar 25 01:18:26.520358 containerd[1763]: time="2025-03-25T01:18:26.519774852Z" level=info msg="StartContainer for \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\"" Mar 25 01:18:26.521119 containerd[1763]: time="2025-03-25T01:18:26.521091289Z" level=info msg="connecting to shim 708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" protocol=ttrpc version=3 Mar 25 01:18:26.542659 systemd[1]: Started cri-containerd-708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8.scope - libcontainer container 708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8. Mar 25 01:18:26.574766 systemd[1]: cri-containerd-708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8.scope: Deactivated successfully. 
Mar 25 01:18:26.580205 containerd[1763]: time="2025-03-25T01:18:26.579657785Z" level=info msg="StartContainer for \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" returns successfully" Mar 25 01:18:26.580205 containerd[1763]: time="2025-03-25T01:18:26.579763785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" id:\"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" pid:3856 exited_at:{seconds:1742865506 nanos:577816588}" Mar 25 01:18:26.580205 containerd[1763]: time="2025-03-25T01:18:26.579834505Z" level=info msg="received exit event container_id:\"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" id:\"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" pid:3856 exited_at:{seconds:1742865506 nanos:577816588}" Mar 25 01:18:26.599957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8-rootfs.mount: Deactivated successfully. Mar 25 01:18:27.091879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392995295.mount: Deactivated successfully. Mar 25 01:18:27.465282 containerd[1763]: time="2025-03-25T01:18:27.463009540Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:18:27.500232 containerd[1763]: time="2025-03-25T01:18:27.500189716Z" level=info msg="Container f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:27.518263 containerd[1763]: time="2025-03-25T01:18:27.518224325Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\"" Mar 25 01:18:27.518939 containerd[1763]: time="2025-03-25T01:18:27.518891284Z" level=info msg="StartContainer for \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\"" Mar 25 01:18:27.519761 containerd[1763]: time="2025-03-25T01:18:27.519728482Z" level=info msg="connecting to shim f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" protocol=ttrpc version=3 Mar 25 01:18:27.538580 systemd[1]: Started cri-containerd-f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238.scope - libcontainer container f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238. Mar 25 01:18:27.560207 systemd[1]: cri-containerd-f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238.scope: Deactivated successfully. 
Mar 25 01:18:27.563114 containerd[1763]: time="2025-03-25T01:18:27.563051007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" id:\"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" pid:3903 exited_at:{seconds:1742865507 nanos:560265052}" Mar 25 01:18:27.569894 containerd[1763]: time="2025-03-25T01:18:27.569856755Z" level=info msg="received exit event container_id:\"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" id:\"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" pid:3903 exited_at:{seconds:1742865507 nanos:560265052}" Mar 25 01:18:27.576161 containerd[1763]: time="2025-03-25T01:18:27.576122865Z" level=info msg="StartContainer for \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" returns successfully" Mar 25 01:18:27.586944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238-rootfs.mount: Deactivated successfully. Mar 25 01:18:28.476256 containerd[1763]: time="2025-03-25T01:18:28.476209508Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:18:28.518155 containerd[1763]: time="2025-03-25T01:18:28.516984678Z" level=info msg="Container 4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:28.522300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599654485.mount: Deactivated successfully. Mar 25 01:18:28.538007 containerd[1763]: time="2025-03-25T01:18:28.537965561Z" level=info msg="CreateContainer within sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\"" Mar 25 01:18:28.538877 containerd[1763]: time="2025-03-25T01:18:28.538839920Z" level=info msg="StartContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\"" Mar 25 01:18:28.543617 containerd[1763]: time="2025-03-25T01:18:28.543511472Z" level=info msg="connecting to shim 4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4" address="unix:///run/containerd/s/f8de434f6164d847a661e280cfecba4096feddb04989775db61706d0ac212068" protocol=ttrpc version=3 Mar 25 01:18:28.571790 systemd[1]: Started cri-containerd-4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4.scope - libcontainer container 4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4. 
Mar 25 01:18:28.620533 containerd[1763]: time="2025-03-25T01:18:28.620493979Z" level=info msg="StartContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" returns successfully" Mar 25 01:18:28.730970 containerd[1763]: time="2025-03-25T01:18:28.730842348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" id:\"ab4a2fa966236734276c9829cb93092df8cde0c6bc3759853304bd70a33209e2\" pid:3978 exited_at:{seconds:1742865508 nanos:730370389}" Mar 25 01:18:28.753906 kubelet[3362]: I0325 01:18:28.753500 3362 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 01:18:28.802343 kubelet[3362]: I0325 01:18:28.800484 3362 topology_manager.go:215] "Topology Admit Handler" podUID="b2e90233-4b7c-443a-8bff-99a7ea4d454c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mbqp7" Mar 25 01:18:28.807077 kubelet[3362]: I0325 01:18:28.807001 3362 topology_manager.go:215] "Topology Admit Handler" podUID="ce27ec39-bfb9-4d8c-94d4-49418090edea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hfwnn" Mar 25 01:18:28.818843 systemd[1]: Created slice kubepods-burstable-podb2e90233_4b7c_443a_8bff_99a7ea4d454c.slice - libcontainer container kubepods-burstable-podb2e90233_4b7c_443a_8bff_99a7ea4d454c.slice. Mar 25 01:18:28.826945 systemd[1]: Created slice kubepods-burstable-podce27ec39_bfb9_4d8c_94d4_49418090edea.slice - libcontainer container kubepods-burstable-podce27ec39_bfb9_4d8c_94d4_49418090edea.slice. Mar 25 01:18:28.856099 containerd[1763]: time="2025-03-25T01:18:28.855909652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:28.859321 containerd[1763]: time="2025-03-25T01:18:28.859096366Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 25 01:18:28.864199 containerd[1763]: time="2025-03-25T01:18:28.864044638Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:28.869345 containerd[1763]: time="2025-03-25T01:18:28.869317588Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.456679083s" Mar 25 01:18:28.869612 containerd[1763]: time="2025-03-25T01:18:28.869519748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 25 01:18:28.874251 containerd[1763]: time="2025-03-25T01:18:28.874231740Z" level=info msg="CreateContainer within sandbox \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:18:28.891934 kubelet[3362]: I0325 01:18:28.891884 3362 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpdw\" (UniqueName: \"kubernetes.io/projected/ce27ec39-bfb9-4d8c-94d4-49418090edea-kube-api-access-jdpdw\") pod \"coredns-7db6d8ff4d-hfwnn\" (UID: \"ce27ec39-bfb9-4d8c-94d4-49418090edea\") " pod="kube-system/coredns-7db6d8ff4d-hfwnn" Mar 25 01:18:28.892559 kubelet[3362]: I0325 01:18:28.892399 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgtfw\" (UniqueName: \"kubernetes.io/projected/b2e90233-4b7c-443a-8bff-99a7ea4d454c-kube-api-access-mgtfw\") pod \"coredns-7db6d8ff4d-mbqp7\" (UID: \"b2e90233-4b7c-443a-8bff-99a7ea4d454c\") " pod="kube-system/coredns-7db6d8ff4d-mbqp7" Mar 25 01:18:28.892559 kubelet[3362]: I0325 01:18:28.892502 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce27ec39-bfb9-4d8c-94d4-49418090edea-config-volume\") pod \"coredns-7db6d8ff4d-hfwnn\" (UID: \"ce27ec39-bfb9-4d8c-94d4-49418090edea\") " pod="kube-system/coredns-7db6d8ff4d-hfwnn" Mar 25 01:18:28.892559 kubelet[3362]: I0325 01:18:28.892526 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2e90233-4b7c-443a-8bff-99a7ea4d454c-config-volume\") pod \"coredns-7db6d8ff4d-mbqp7\" (UID: \"b2e90233-4b7c-443a-8bff-99a7ea4d454c\") " pod="kube-system/coredns-7db6d8ff4d-mbqp7" Mar 25 01:18:28.904893 containerd[1763]: time="2025-03-25T01:18:28.904130608Z" level=info msg="Container 0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:28.925163 containerd[1763]: time="2025-03-25T01:18:28.925116372Z" level=info msg="CreateContainer within sandbox \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\"" Mar 25 01:18:28.927353 containerd[1763]: time="2025-03-25T01:18:28.927127368Z" level=info msg="StartContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\"" Mar 25 01:18:28.928497 containerd[1763]: time="2025-03-25T01:18:28.928409046Z" level=info msg="connecting to shim 0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5" address="unix:///run/containerd/s/a2a38eee0c640eb430d2c75f6d056eca4438f526b7f1e7a41ac6bcb7946d90d8" protocol=ttrpc version=3 Mar 25 01:18:28.949592 systemd[1]: Started cri-containerd-0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5.scope - libcontainer container 0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5. 
Mar 25 01:18:28.982583 containerd[1763]: time="2025-03-25T01:18:28.982284913Z" level=info msg="StartContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" returns successfully" Mar 25 01:18:29.125142 containerd[1763]: time="2025-03-25T01:18:29.125081506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mbqp7,Uid:b2e90233-4b7c-443a-8bff-99a7ea4d454c,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:29.132856 containerd[1763]: time="2025-03-25T01:18:29.132697133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hfwnn,Uid:ce27ec39-bfb9-4d8c-94d4-49418090edea,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:29.555982 kubelet[3362]: I0325 01:18:29.555918 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pm5jz" podStartSLOduration=1.528929673 podStartE2EDuration="15.555887041s" podCreationTimestamp="2025-03-25 01:18:14 +0000 UTC" firstStartedPulling="2025-03-25 01:18:14.844562657 +0000 UTC m=+16.601784144" lastFinishedPulling="2025-03-25 01:18:28.871519985 +0000 UTC m=+30.628741512" observedRunningTime="2025-03-25 01:18:29.555604722 +0000 UTC m=+31.312826249" watchObservedRunningTime="2025-03-25 01:18:29.555887041 +0000 UTC m=+31.313108608" Mar 25 01:18:32.913057 systemd-networkd[1487]: cilium_host: Link UP Mar 25 01:18:32.915575 systemd-networkd[1487]: cilium_net: Link UP Mar 25 01:18:32.915824 systemd-networkd[1487]: cilium_net: Gained carrier Mar 25 01:18:32.916032 systemd-networkd[1487]: cilium_host: Gained carrier Mar 25 01:18:32.916162 systemd-networkd[1487]: cilium_net: Gained IPv6LL Mar 25 01:18:32.916305 systemd-networkd[1487]: cilium_host: Gained IPv6LL Mar 25 01:18:33.049328 systemd-networkd[1487]: cilium_vxlan: Link UP Mar 25 01:18:33.049335 systemd-networkd[1487]: cilium_vxlan: Gained carrier Mar 25 01:18:33.329479 kernel: NET: Registered PF_ALG protocol family Mar 25 01:18:34.033620 systemd-networkd[1487]: lxc_health: Link UP Mar 25 01:18:34.040894 systemd-networkd[1487]: lxc_health: Gained carrier Mar 25 01:18:34.192662 kernel: eth0: renamed from tmp3c262 Mar 25 01:18:34.200402 systemd-networkd[1487]: lxc1c53bd853e7e: Link UP Mar 25 01:18:34.203100 systemd-networkd[1487]: lxc1c53bd853e7e: Gained carrier Mar 25 01:18:34.225866 kernel: eth0: renamed from tmp1dca3 Mar 25 01:18:34.231297 systemd-networkd[1487]: lxcd7942273f587: Link UP Mar 25 01:18:34.233686 systemd-networkd[1487]: lxcd7942273f587: Gained carrier Mar 25 01:18:34.611839 kubelet[3362]: I0325 01:18:34.611776 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5qstb" podStartSLOduration=11.978106849 podStartE2EDuration="20.611760299s" podCreationTimestamp="2025-03-25 01:18:14 +0000 UTC" firstStartedPulling="2025-03-25 01:18:14.778545176 +0000 UTC m=+16.535766703" lastFinishedPulling="2025-03-25 01:18:23.412198626 +0000 UTC m=+25.169420153" observedRunningTime="2025-03-25 01:18:29.691337687 +0000 UTC m=+31.448559214" watchObservedRunningTime="2025-03-25 01:18:34.611760299 +0000 UTC m=+36.368981826" Mar 25 01:18:34.762637 systemd-networkd[1487]: cilium_vxlan: Gained IPv6LL Mar 25 01:18:35.082606 systemd-networkd[1487]: lxc_health: Gained IPv6LL Mar 25 01:18:35.850643 systemd-networkd[1487]: lxcd7942273f587: Gained IPv6LL Mar 25 01:18:36.170630 systemd-networkd[1487]: lxc1c53bd853e7e: Gained IPv6LL Mar 25 01:18:37.837647 containerd[1763]: time="2025-03-25T01:18:37.837560165Z" level=info msg="connecting to shim 
3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89" address="unix:///run/containerd/s/c9ed54cfc16ca851d25ce39ca2b53fb5762d68f715fb075ecfdfb46101a20c49" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:37.850758 containerd[1763]: time="2025-03-25T01:18:37.850718942Z" level=info msg="connecting to shim 1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a" address="unix:///run/containerd/s/8bba2bb4bd592f2a46b3859222347bf10531adc9b977a4bbe22e568e651a707c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:37.894742 systemd[1]: Started cri-containerd-3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89.scope - libcontainer container 3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89. Mar 25 01:18:37.904600 systemd[1]: Started cri-containerd-1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a.scope - libcontainer container 1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a. Mar 25 01:18:37.941693 containerd[1763]: time="2025-03-25T01:18:37.941598663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mbqp7,Uid:b2e90233-4b7c-443a-8bff-99a7ea4d454c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89\"" Mar 25 01:18:37.945202 containerd[1763]: time="2025-03-25T01:18:37.945164057Z" level=info msg="CreateContainer within sandbox \"3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:18:37.948924 containerd[1763]: time="2025-03-25T01:18:37.948754211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hfwnn,Uid:ce27ec39-bfb9-4d8c-94d4-49418090edea,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a\"" Mar 25 01:18:37.952995 containerd[1763]: time="2025-03-25T01:18:37.952962364Z" level=info msg="CreateContainer within sandbox \"1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:18:38.001348 containerd[1763]: time="2025-03-25T01:18:38.001304799Z" level=info msg="Container a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:38.004477 containerd[1763]: time="2025-03-25T01:18:38.004307714Z" level=info msg="Container f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:38.036715 containerd[1763]: time="2025-03-25T01:18:38.036675258Z" level=info msg="CreateContainer within sandbox \"3c26250c3e1c91fc7516966a7052cdbd005b1a757f5162490eff0820fef93e89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35\"" Mar 25 01:18:38.037395 containerd[1763]: time="2025-03-25T01:18:38.037360096Z" level=info msg="StartContainer for \"a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35\"" Mar 25 01:18:38.038326 containerd[1763]: time="2025-03-25T01:18:38.038238455Z" level=info msg="connecting to shim a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35" address="unix:///run/containerd/s/c9ed54cfc16ca851d25ce39ca2b53fb5762d68f715fb075ecfdfb46101a20c49" protocol=ttrpc version=3 Mar 25 01:18:38.043755 containerd[1763]: time="2025-03-25T01:18:38.043659285Z" level=info msg="CreateContainer within sandbox 
\"1dca3ea67e86922de801a7dbf485de79e458b7355668c5025b36bcf955f41a4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be\"" Mar 25 01:18:38.044682 containerd[1763]: time="2025-03-25T01:18:38.044660244Z" level=info msg="StartContainer for \"f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be\"" Mar 25 01:18:38.046561 containerd[1763]: time="2025-03-25T01:18:38.046237081Z" level=info msg="connecting to shim f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be" address="unix:///run/containerd/s/8bba2bb4bd592f2a46b3859222347bf10531adc9b977a4bbe22e568e651a707c" protocol=ttrpc version=3 Mar 25 01:18:38.060612 systemd[1]: Started cri-containerd-a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35.scope - libcontainer container a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35. Mar 25 01:18:38.070760 systemd[1]: Started cri-containerd-f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be.scope - libcontainer container f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be. Mar 25 01:18:38.110620 containerd[1763]: time="2025-03-25T01:18:38.110075730Z" level=info msg="StartContainer for \"f0841fe364504d6ef9b1e621de005b0cb2d57d76b80fd1c1bb8e8f4ce67db6be\" returns successfully" Mar 25 01:18:38.112800 containerd[1763]: time="2025-03-25T01:18:38.112736805Z" level=info msg="StartContainer for \"a15e511f17af229ac6d760c9cb6bc49419de1e0f3d42a1c9518dc1e410ec0d35\" returns successfully" Mar 25 01:18:38.534620 kubelet[3362]: I0325 01:18:38.534372 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mbqp7" podStartSLOduration=24.53435523 podStartE2EDuration="24.53435523s" podCreationTimestamp="2025-03-25 01:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:38.533500351 +0000 UTC m=+40.290721838" watchObservedRunningTime="2025-03-25 01:18:38.53435523 +0000 UTC m=+40.291576797" Mar 25 01:18:38.535953 kubelet[3362]: I0325 01:18:38.535166 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hfwnn" podStartSLOduration=24.534599989 podStartE2EDuration="24.534599989s" podCreationTimestamp="2025-03-25 01:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:38.516285661 +0000 UTC m=+40.273507228" watchObservedRunningTime="2025-03-25 01:18:38.534599989 +0000 UTC m=+40.291821556" Mar 25 01:18:58.121759 waagent[1989]: 2025-03-25T01:18:58.121706Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 25 01:18:58.128517 waagent[1989]: 2025-03-25T01:18:58.128477Z INFO ExtHandler Mar 25 01:18:58.128612 waagent[1989]: 2025-03-25T01:18:58.128591Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a806e2f6-b531-41fa-999d-25d7449fd2c4 eTag: 6137276049111202786 source: Fabric] Mar 25 01:18:58.128936 waagent[1989]: 2025-03-25T01:18:58.128898Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 25 01:18:58.129495 waagent[1989]: 2025-03-25T01:18:58.129439Z INFO ExtHandler Mar 25 01:18:58.129559 waagent[1989]: 2025-03-25T01:18:58.129534Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 25 01:18:58.207702 waagent[1989]: 2025-03-25T01:18:58.207663Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:18:58.283260 waagent[1989]: 2025-03-25T01:18:58.282476Z INFO ExtHandler Downloaded certificate {'thumbprint': '7D49523096C26F3475C38D9203F6783CDC37D87E', 'hasPrivateKey': True} Mar 25 01:18:58.283260 waagent[1989]: 2025-03-25T01:18:58.282879Z INFO ExtHandler Downloaded certificate {'thumbprint': '2AFC58028CFC880AB23B2AE3F4EF463A0D44BC85', 'hasPrivateKey': False} Mar 25 01:18:58.283529 waagent[1989]: 2025-03-25T01:18:58.283493Z INFO ExtHandler Fetch goal state completed Mar 25 01:18:58.284042 waagent[1989]: 2025-03-25T01:18:58.284003Z INFO ExtHandler ExtHandler Mar 25 01:18:58.284192 waagent[1989]: 2025-03-25T01:18:58.284166Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 73f66c02-8df0-4478-9b0f-8b89d4df6590 correlation c793f0dd-ba92-4082-a2ea-e12fa2dc509c created: 2025-03-25T01:18:46.599487Z] Mar 25 01:18:58.284642 waagent[1989]: 2025-03-25T01:18:58.284611Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 25 01:18:58.285334 waagent[1989]: 2025-03-25T01:18:58.285304Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Mar 25 01:19:33.820070 systemd[1]: Started sshd@7-10.200.20.47:22-10.200.16.10:43970.service - OpenSSH per-connection server daemon (10.200.16.10:43970). Mar 25 01:19:34.274022 sshd[4679]: Accepted publickey for core from 10.200.16.10 port 43970 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:34.275331 sshd-session[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:34.280213 systemd-logind[1741]: New session 10 of user core. Mar 25 01:19:34.285652 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:19:34.666310 sshd[4681]: Connection closed by 10.200.16.10 port 43970 Mar 25 01:19:34.666657 sshd-session[4679]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:34.669194 systemd[1]: sshd@7-10.200.20.47:22-10.200.16.10:43970.service: Deactivated successfully. Mar 25 01:19:34.670924 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:19:34.672611 systemd-logind[1741]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:19:34.673634 systemd-logind[1741]: Removed session 10. Mar 25 01:19:39.747964 systemd[1]: Started sshd@8-10.200.20.47:22-10.200.16.10:59870.service - OpenSSH per-connection server daemon (10.200.16.10:59870). Mar 25 01:19:40.206725 sshd[4696]: Accepted publickey for core from 10.200.16.10 port 59870 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:40.207948 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:40.212862 systemd-logind[1741]: New session 11 of user core. Mar 25 01:19:40.220571 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:19:40.587053 sshd[4698]: Connection closed by 10.200.16.10 port 59870 Mar 25 01:19:40.587583 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:40.591142 systemd[1]: sshd@8-10.200.20.47:22-10.200.16.10:59870.service: Deactivated successfully. 
Mar 25 01:19:40.593024 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:19:40.594066 systemd-logind[1741]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:19:40.595046 systemd-logind[1741]: Removed session 11. Mar 25 01:19:45.675425 systemd[1]: Started sshd@9-10.200.20.47:22-10.200.16.10:59876.service - OpenSSH per-connection server daemon (10.200.16.10:59876). Mar 25 01:19:46.165634 sshd[4713]: Accepted publickey for core from 10.200.16.10 port 59876 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:46.166934 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:46.171635 systemd-logind[1741]: New session 12 of user core. Mar 25 01:19:46.178565 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:19:46.581689 sshd[4715]: Connection closed by 10.200.16.10 port 59876 Mar 25 01:19:46.582220 sshd-session[4713]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:46.585104 systemd[1]: sshd@9-10.200.20.47:22-10.200.16.10:59876.service: Deactivated successfully. Mar 25 01:19:46.586889 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:19:46.588572 systemd-logind[1741]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:19:46.589685 systemd-logind[1741]: Removed session 12. Mar 25 01:19:51.670314 systemd[1]: Started sshd@10-10.200.20.47:22-10.200.16.10:55030.service - OpenSSH per-connection server daemon (10.200.16.10:55030). Mar 25 01:19:52.161503 sshd[4728]: Accepted publickey for core from 10.200.16.10 port 55030 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:52.162799 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:52.166773 systemd-logind[1741]: New session 13 of user core. Mar 25 01:19:52.175587 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:19:52.577759 sshd[4731]: Connection closed by 10.200.16.10 port 55030 Mar 25 01:19:52.578137 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:52.580963 systemd[1]: sshd@10-10.200.20.47:22-10.200.16.10:55030.service: Deactivated successfully. Mar 25 01:19:52.583955 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:19:52.585516 systemd-logind[1741]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:19:52.586384 systemd-logind[1741]: Removed session 13. Mar 25 01:19:52.665842 systemd[1]: Started sshd@11-10.200.20.47:22-10.200.16.10:55046.service - OpenSSH per-connection server daemon (10.200.16.10:55046). Mar 25 01:19:53.157660 sshd[4743]: Accepted publickey for core from 10.200.16.10 port 55046 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:53.158912 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:53.163243 systemd-logind[1741]: New session 14 of user core. Mar 25 01:19:53.170594 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:19:53.608080 sshd[4746]: Connection closed by 10.200.16.10 port 55046 Mar 25 01:19:53.607923 sshd-session[4743]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:53.610742 systemd[1]: sshd@11-10.200.20.47:22-10.200.16.10:55046.service: Deactivated successfully. Mar 25 01:19:53.614041 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:19:53.615476 systemd-logind[1741]: Session 14 logged out. Waiting for processes to exit. 
Mar 25 01:19:53.616867 systemd-logind[1741]: Removed session 14. Mar 25 01:19:53.696259 systemd[1]: Started sshd@12-10.200.20.47:22-10.200.16.10:55050.service - OpenSSH per-connection server daemon (10.200.16.10:55050). Mar 25 01:19:54.150054 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 55050 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:54.151679 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:54.157046 systemd-logind[1741]: New session 15 of user core. Mar 25 01:19:54.162659 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:19:54.533540 sshd[4758]: Connection closed by 10.200.16.10 port 55050 Mar 25 01:19:54.534318 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:54.537645 systemd-logind[1741]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:19:54.537832 systemd[1]: sshd@12-10.200.20.47:22-10.200.16.10:55050.service: Deactivated successfully. Mar 25 01:19:54.540059 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:19:54.541012 systemd-logind[1741]: Removed session 15. Mar 25 01:19:59.622100 systemd[1]: Started sshd@13-10.200.20.47:22-10.200.16.10:43824.service - OpenSSH per-connection server daemon (10.200.16.10:43824). Mar 25 01:20:00.114593 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 43824 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:00.115877 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:00.120644 systemd-logind[1741]: New session 16 of user core. Mar 25 01:20:00.125589 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:20:00.527383 sshd[4774]: Connection closed by 10.200.16.10 port 43824 Mar 25 01:20:00.528031 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:00.531586 systemd[1]: sshd@13-10.200.20.47:22-10.200.16.10:43824.service: Deactivated successfully. Mar 25 01:20:00.533603 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:20:00.534425 systemd-logind[1741]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:20:00.535429 systemd-logind[1741]: Removed session 16. Mar 25 01:20:05.620386 systemd[1]: Started sshd@14-10.200.20.47:22-10.200.16.10:43838.service - OpenSSH per-connection server daemon (10.200.16.10:43838). Mar 25 01:20:06.108651 sshd[4786]: Accepted publickey for core from 10.200.16.10 port 43838 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:06.109942 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:06.113958 systemd-logind[1741]: New session 17 of user core. Mar 25 01:20:06.122590 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:20:06.523818 sshd[4788]: Connection closed by 10.200.16.10 port 43838 Mar 25 01:20:06.524394 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:06.527121 systemd[1]: sshd@14-10.200.20.47:22-10.200.16.10:43838.service: Deactivated successfully. Mar 25 01:20:06.528948 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:20:06.530658 systemd-logind[1741]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:20:06.531772 systemd-logind[1741]: Removed session 17. 
Mar 25 01:20:06.612853 systemd[1]: Started sshd@15-10.200.20.47:22-10.200.16.10:43848.service - OpenSSH per-connection server daemon (10.200.16.10:43848). Mar 25 01:20:07.103840 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 43848 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:07.105126 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:07.110781 systemd-logind[1741]: New session 18 of user core. Mar 25 01:20:07.116595 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:20:07.558485 sshd[4802]: Connection closed by 10.200.16.10 port 43848 Mar 25 01:20:07.559035 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:07.562428 systemd[1]: sshd@15-10.200.20.47:22-10.200.16.10:43848.service: Deactivated successfully. Mar 25 01:20:07.564164 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:20:07.564952 systemd-logind[1741]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:20:07.565977 systemd-logind[1741]: Removed session 18. Mar 25 01:20:07.643679 systemd[1]: Started sshd@16-10.200.20.47:22-10.200.16.10:43864.service - OpenSSH per-connection server daemon (10.200.16.10:43864). Mar 25 01:20:08.096346 sshd[4812]: Accepted publickey for core from 10.200.16.10 port 43864 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:08.097667 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:08.101783 systemd-logind[1741]: New session 19 of user core. Mar 25 01:20:08.110655 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:20:09.839287 sshd[4814]: Connection closed by 10.200.16.10 port 43864 Mar 25 01:20:09.839741 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:09.843308 systemd[1]: sshd@16-10.200.20.47:22-10.200.16.10:43864.service: Deactivated successfully. Mar 25 01:20:09.844960 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:20:09.845767 systemd-logind[1741]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:20:09.846886 systemd-logind[1741]: Removed session 19. Mar 25 01:20:09.927808 systemd[1]: Started sshd@17-10.200.20.47:22-10.200.16.10:59096.service - OpenSSH per-connection server daemon (10.200.16.10:59096). Mar 25 01:20:10.426057 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 59096 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:10.427374 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:10.431292 systemd-logind[1741]: New session 20 of user core. Mar 25 01:20:10.436609 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:20:10.958126 sshd[4834]: Connection closed by 10.200.16.10 port 59096 Mar 25 01:20:10.958821 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:10.962471 systemd-logind[1741]: Session 20 logged out. Waiting for processes to exit. Mar 25 01:20:10.963340 systemd[1]: sshd@17-10.200.20.47:22-10.200.16.10:59096.service: Deactivated successfully. Mar 25 01:20:10.965716 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:20:10.967378 systemd-logind[1741]: Removed session 20. Mar 25 01:20:11.050568 systemd[1]: Started sshd@18-10.200.20.47:22-10.200.16.10:59104.service - OpenSSH per-connection server daemon (10.200.16.10:59104). 
Mar 25 01:20:11.548940 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 59104 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:11.550248 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:11.554976 systemd-logind[1741]: New session 21 of user core. Mar 25 01:20:11.564623 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:20:11.965337 sshd[4846]: Connection closed by 10.200.16.10 port 59104 Mar 25 01:20:11.965910 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:11.969492 systemd-logind[1741]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:20:11.969847 systemd[1]: sshd@18-10.200.20.47:22-10.200.16.10:59104.service: Deactivated successfully. Mar 25 01:20:11.972874 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:20:11.974368 systemd-logind[1741]: Removed session 21. Mar 25 01:20:17.051651 systemd[1]: Started sshd@19-10.200.20.47:22-10.200.16.10:59118.service - OpenSSH per-connection server daemon (10.200.16.10:59118). Mar 25 01:20:17.508117 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 59118 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:17.509433 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:17.514527 systemd-logind[1741]: New session 22 of user core. Mar 25 01:20:17.520606 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 01:20:17.903476 sshd[4866]: Connection closed by 10.200.16.10 port 59118 Mar 25 01:20:17.903941 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:17.907089 systemd[1]: sshd@19-10.200.20.47:22-10.200.16.10:59118.service: Deactivated successfully. Mar 25 01:20:17.909689 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:20:17.910774 systemd-logind[1741]: Session 22 logged out. Waiting for processes to exit. Mar 25 01:20:17.911781 systemd-logind[1741]: Removed session 22. Mar 25 01:20:22.993043 systemd[1]: Started sshd@20-10.200.20.47:22-10.200.16.10:41888.service - OpenSSH per-connection server daemon (10.200.16.10:41888). Mar 25 01:20:23.485557 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 41888 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:23.487047 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:23.491546 systemd-logind[1741]: New session 23 of user core. Mar 25 01:20:23.495601 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 01:20:23.899979 sshd[4881]: Connection closed by 10.200.16.10 port 41888 Mar 25 01:20:23.900521 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:23.904017 systemd-logind[1741]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:20:23.904766 systemd[1]: sshd@20-10.200.20.47:22-10.200.16.10:41888.service: Deactivated successfully. Mar 25 01:20:23.906767 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:20:23.907908 systemd-logind[1741]: Removed session 23. Mar 25 01:20:28.992902 systemd[1]: Started sshd@21-10.200.20.47:22-10.200.16.10:53028.service - OpenSSH per-connection server daemon (10.200.16.10:53028). 
Mar 25 01:20:29.480909 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 53028 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:29.482256 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:29.486505 systemd-logind[1741]: New session 24 of user core. Mar 25 01:20:29.491659 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 25 01:20:29.897710 sshd[4894]: Connection closed by 10.200.16.10 port 53028 Mar 25 01:20:29.898233 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:29.902666 systemd[1]: sshd@21-10.200.20.47:22-10.200.16.10:53028.service: Deactivated successfully. Mar 25 01:20:29.906112 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 01:20:29.906989 systemd-logind[1741]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:20:29.907957 systemd-logind[1741]: Removed session 24. Mar 25 01:20:29.986761 systemd[1]: Started sshd@22-10.200.20.47:22-10.200.16.10:53036.service - OpenSSH per-connection server daemon (10.200.16.10:53036). Mar 25 01:20:30.483889 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 53036 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:30.485196 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:30.489787 systemd-logind[1741]: New session 25 of user core. Mar 25 01:20:30.495611 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 25 01:20:32.690039 containerd[1763]: time="2025-03-25T01:20:32.689826119Z" level=info msg="StopContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" with timeout 30 (s)" Mar 25 01:20:32.690724 containerd[1763]: time="2025-03-25T01:20:32.690689750Z" level=info msg="Stop container \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" with signal terminated" Mar 25 01:20:32.705105 containerd[1763]: time="2025-03-25T01:20:32.705005004Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:20:32.705813 systemd[1]: cri-containerd-0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5.scope: Deactivated successfully. 
Mar 25 01:20:32.711288 containerd[1763]: time="2025-03-25T01:20:32.710579946Z" level=info msg="received exit event container_id:\"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" id:\"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" pid:4032 exited_at:{seconds:1742865632 nanos:709727835}" Mar 25 01:20:32.711288 containerd[1763]: time="2025-03-25T01:20:32.710875063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" id:\"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" pid:4032 exited_at:{seconds:1742865632 nanos:709727835}" Mar 25 01:20:32.713009 containerd[1763]: time="2025-03-25T01:20:32.712873603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" id:\"c3cdefb2661d14342be30ffff2d7adf2ebdff461b378ae60435a6e020b5585d5\" pid:4926 exited_at:{seconds:1742865632 nanos:712351048}" Mar 25 01:20:32.716310 containerd[1763]: time="2025-03-25T01:20:32.716281648Z" level=info msg="StopContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" with timeout 2 (s)" Mar 25 01:20:32.717148 containerd[1763]: time="2025-03-25T01:20:32.717069400Z" level=info msg="Stop container \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" with signal terminated" Mar 25 01:20:32.726363 systemd-networkd[1487]: lxc_health: Link DOWN Mar 25 01:20:32.726370 systemd-networkd[1487]: lxc_health: Lost carrier Mar 25 01:20:32.740745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5-rootfs.mount: Deactivated successfully. Mar 25 01:20:32.744090 systemd[1]: cri-containerd-4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4.scope: Deactivated successfully. Mar 25 01:20:32.745170 systemd[1]: cri-containerd-4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4.scope: Consumed 6.051s CPU time, 125M memory peak, 152K read from disk, 12.9M written to disk. Mar 25 01:20:32.745459 containerd[1763]: time="2025-03-25T01:20:32.745325630Z" level=info msg="received exit event container_id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" pid:3945 exited_at:{seconds:1742865632 nanos:744966474}" Mar 25 01:20:32.746074 containerd[1763]: time="2025-03-25T01:20:32.746050103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" id:\"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" pid:3945 exited_at:{seconds:1742865632 nanos:744966474}" Mar 25 01:20:32.763680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4-rootfs.mount: Deactivated successfully. 
Mar 25 01:20:32.837649 containerd[1763]: time="2025-03-25T01:20:32.837520855Z" level=info msg="StopContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" returns successfully" Mar 25 01:20:32.838883 containerd[1763]: time="2025-03-25T01:20:32.838835933Z" level=info msg="StopPodSandbox for \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\"" Mar 25 01:20:32.838984 containerd[1763]: time="2025-03-25T01:20:32.838896453Z" level=info msg="Container to stop \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.838984 containerd[1763]: time="2025-03-25T01:20:32.838908733Z" level=info msg="Container to stop \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.838984 containerd[1763]: time="2025-03-25T01:20:32.838917093Z" level=info msg="Container to stop \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.838984 containerd[1763]: time="2025-03-25T01:20:32.838927413Z" level=info msg="Container to stop \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.838984 containerd[1763]: time="2025-03-25T01:20:32.838936853Z" level=info msg="Container to stop \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.840197 containerd[1763]: time="2025-03-25T01:20:32.840111811Z" level=info msg="StopContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" returns successfully" Mar 25 01:20:32.841226 containerd[1763]: time="2025-03-25T01:20:32.841193129Z" level=info msg="StopPodSandbox for \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\"" Mar 25 01:20:32.841303 containerd[1763]: time="2025-03-25T01:20:32.841257009Z" level=info msg="Container to stop \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:20:32.846832 systemd[1]: cri-containerd-80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c.scope: Deactivated successfully. Mar 25 01:20:32.848229 systemd[1]: cri-containerd-70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e.scope: Deactivated successfully. Mar 25 01:20:32.851023 containerd[1763]: time="2025-03-25T01:20:32.850971072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" id:\"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" pid:3550 exit_status:137 exited_at:{seconds:1742865632 nanos:849814634}" Mar 25 01:20:32.880350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c-rootfs.mount: Deactivated successfully. Mar 25 01:20:32.887591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e-rootfs.mount: Deactivated successfully. 
Mar 25 01:20:32.920067 containerd[1763]: time="2025-03-25T01:20:32.919873354Z" level=info msg="shim disconnected" id=80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c namespace=k8s.io Mar 25 01:20:32.920067 containerd[1763]: time="2025-03-25T01:20:32.919904594Z" level=warning msg="cleaning up after shim disconnected" id=80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c namespace=k8s.io Mar 25 01:20:32.920067 containerd[1763]: time="2025-03-25T01:20:32.919933314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:20:32.920067 containerd[1763]: time="2025-03-25T01:20:32.920012354Z" level=info msg="shim disconnected" id=70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e namespace=k8s.io Mar 25 01:20:32.920482 containerd[1763]: time="2025-03-25T01:20:32.920191114Z" level=warning msg="cleaning up after shim disconnected" id=70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e namespace=k8s.io Mar 25 01:20:32.920482 containerd[1763]: time="2025-03-25T01:20:32.920387673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:20:32.932488 containerd[1763]: time="2025-03-25T01:20:32.932302173Z" level=info msg="received exit event sandbox_id:\"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" exit_status:137 exited_at:{seconds:1742865632 nanos:854212427}" Mar 25 01:20:32.932488 containerd[1763]: time="2025-03-25T01:20:32.932372773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" id:\"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" pid:3489 exit_status:137 exited_at:{seconds:1742865632 nanos:854212427}" Mar 25 01:20:32.935577 containerd[1763]: time="2025-03-25T01:20:32.932735972Z" level=info msg="TearDown network for sandbox \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" successfully" Mar 25 01:20:32.935577 containerd[1763]: time="2025-03-25T01:20:32.932757772Z" level=info msg="StopPodSandbox for \"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" returns successfully" Mar 25 01:20:32.935577 containerd[1763]: time="2025-03-25T01:20:32.932878532Z" level=info msg="received exit event sandbox_id:\"70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e\" exit_status:137 exited_at:{seconds:1742865632 nanos:849814634}" Mar 25 01:20:32.935004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70a0f44371eb183af5de4e69a7fd44cb44125c241eb52df5e6e25c9b904afb6e-shm.mount: Deactivated successfully. 
Mar 25 01:20:32.935951 containerd[1763]: time="2025-03-25T01:20:32.935830687Z" level=info msg="TearDown network for sandbox \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" successfully" Mar 25 01:20:32.935951 containerd[1763]: time="2025-03-25T01:20:32.935856847Z" level=info msg="StopPodSandbox for \"80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c\" returns successfully" Mar 25 01:20:33.047276 kubelet[3362]: I0325 01:20:33.047160 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-hostproc\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047695 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h4nn\" (UniqueName: \"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-kube-api-access-4h4nn\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047725 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5892ad3-b002-4ad4-b05f-81f696953085-cilium-config-path\") pod \"d5892ad3-b002-4ad4-b05f-81f696953085\" (UID: \"d5892ad3-b002-4ad4-b05f-81f696953085\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047745 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-run\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047786 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47896f08-5b5f-4441-b058-69942e254e71-clustermesh-secrets\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047800 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-xtables-lock\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048289 kubelet[3362]: I0325 01:20:33.047812 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-bpf-maps\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047826 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-etc-cni-netd\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047840 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-lib-modules\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " 
Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047855 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cni-path\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047870 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-hubble-tls\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047884 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-net\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048578 kubelet[3362]: I0325 01:20:33.047898 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-kernel\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048702 kubelet[3362]: I0325 01:20:33.047913 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-cgroup\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048702 kubelet[3362]: I0325 01:20:33.047929 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47896f08-5b5f-4441-b058-69942e254e71-cilium-config-path\") pod \"47896f08-5b5f-4441-b058-69942e254e71\" (UID: \"47896f08-5b5f-4441-b058-69942e254e71\") " Mar 25 01:20:33.048702 kubelet[3362]: I0325 01:20:33.047944 3362 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmr2c\" (UniqueName: \"kubernetes.io/projected/d5892ad3-b002-4ad4-b05f-81f696953085-kube-api-access-dmr2c\") pod \"d5892ad3-b002-4ad4-b05f-81f696953085\" (UID: \"d5892ad3-b002-4ad4-b05f-81f696953085\") " Mar 25 01:20:33.048702 kubelet[3362]: I0325 01:20:33.048370 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.048702 kubelet[3362]: I0325 01:20:33.048424 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-hostproc" (OuterVolumeSpecName: "hostproc") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.049461 kubelet[3362]: I0325 01:20:33.049272 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.049461 kubelet[3362]: I0325 01:20:33.049318 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cni-path" (OuterVolumeSpecName: "cni-path") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.049828 kubelet[3362]: I0325 01:20:33.049688 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.050166 kubelet[3362]: I0325 01:20:33.050133 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.050464 kubelet[3362]: I0325 01:20:33.050307 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.052400 kubelet[3362]: I0325 01:20:33.052276 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.053536 kubelet[3362]: I0325 01:20:33.052520 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.053536 kubelet[3362]: I0325 01:20:33.052549 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:20:33.054070 kubelet[3362]: I0325 01:20:33.054026 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:20:33.054181 kubelet[3362]: I0325 01:20:33.054138 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-kube-api-access-4h4nn" (OuterVolumeSpecName: "kube-api-access-4h4nn") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "kube-api-access-4h4nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:20:33.054930 kubelet[3362]: I0325 01:20:33.054906 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5892ad3-b002-4ad4-b05f-81f696953085-kube-api-access-dmr2c" (OuterVolumeSpecName: "kube-api-access-dmr2c") pod "d5892ad3-b002-4ad4-b05f-81f696953085" (UID: "d5892ad3-b002-4ad4-b05f-81f696953085"). InnerVolumeSpecName "kube-api-access-dmr2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:20:33.055369 kubelet[3362]: I0325 01:20:33.055339 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5892ad3-b002-4ad4-b05f-81f696953085-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5892ad3-b002-4ad4-b05f-81f696953085" (UID: "d5892ad3-b002-4ad4-b05f-81f696953085"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:20:33.056001 kubelet[3362]: I0325 01:20:33.055962 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47896f08-5b5f-4441-b058-69942e254e71-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:20:33.056438 kubelet[3362]: I0325 01:20:33.056420 3362 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47896f08-5b5f-4441-b058-69942e254e71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47896f08-5b5f-4441-b058-69942e254e71" (UID: "47896f08-5b5f-4441-b058-69942e254e71"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:20:33.148384 kubelet[3362]: I0325 01:20:33.148352 3362 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-bpf-maps\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148588 kubelet[3362]: I0325 01:20:33.148573 3362 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-etc-cni-netd\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148681 kubelet[3362]: I0325 01:20:33.148670 3362 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-lib-modules\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148757 kubelet[3362]: I0325 01:20:33.148747 3362 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47896f08-5b5f-4441-b058-69942e254e71-clustermesh-secrets\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148831 kubelet[3362]: I0325 01:20:33.148820 3362 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-xtables-lock\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148901 kubelet[3362]: I0325 01:20:33.148892 3362 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-net\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.148971 kubelet[3362]: I0325 01:20:33.148948 3362 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cni-path\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149033 kubelet[3362]: I0325 01:20:33.149024 3362 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-hubble-tls\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149124 kubelet[3362]: I0325 01:20:33.149114 3362 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-host-proc-sys-kernel\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149197 kubelet[3362]: I0325 01:20:33.149186 3362 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-cgroup\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149267 kubelet[3362]: I0325 01:20:33.149259 3362 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47896f08-5b5f-4441-b058-69942e254e71-cilium-config-path\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149334 kubelet[3362]: I0325 01:20:33.149310 3362 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dmr2c\" (UniqueName: \"kubernetes.io/projected/d5892ad3-b002-4ad4-b05f-81f696953085-kube-api-access-dmr2c\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149430 kubelet[3362]: I0325 01:20:33.149387 3362 
reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5892ad3-b002-4ad4-b05f-81f696953085-cilium-config-path\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149430 kubelet[3362]: I0325 01:20:33.149400 3362 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-hostproc\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149430 kubelet[3362]: I0325 01:20:33.149409 3362 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4h4nn\" (UniqueName: \"kubernetes.io/projected/47896f08-5b5f-4441-b058-69942e254e71-kube-api-access-4h4nn\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.149430 kubelet[3362]: I0325 01:20:33.149418 3362 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47896f08-5b5f-4441-b058-69942e254e71-cilium-run\") on node \"ci-4284.0.0-a-be6d65597e\" DevicePath \"\"" Mar 25 01:20:33.443547 kubelet[3362]: E0325 01:20:33.443503 3362 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:20:33.720268 kubelet[3362]: I0325 01:20:33.719049 3362 scope.go:117] "RemoveContainer" containerID="4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4" Mar 25 01:20:33.728356 systemd[1]: Removed slice kubepods-burstable-pod47896f08_5b5f_4441_b058_69942e254e71.slice - libcontainer container kubepods-burstable-pod47896f08_5b5f_4441_b058_69942e254e71.slice. Mar 25 01:20:33.729177 systemd[1]: kubepods-burstable-pod47896f08_5b5f_4441_b058_69942e254e71.slice: Consumed 6.123s CPU time, 125.5M memory peak, 152K read from disk, 12.9M written to disk. Mar 25 01:20:33.729895 containerd[1763]: time="2025-03-25T01:20:33.728926049Z" level=info msg="RemoveContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\"" Mar 25 01:20:33.732705 systemd[1]: Removed slice kubepods-besteffort-podd5892ad3_b002_4ad4_b05f_81f696953085.slice - libcontainer container kubepods-besteffort-podd5892ad3_b002_4ad4_b05f_81f696953085.slice. Mar 25 01:20:33.743034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80d883575ab23e43a3fdb5981de8d271d4d00514512690ad46c9fd7fbd27d27c-shm.mount: Deactivated successfully. Mar 25 01:20:33.743317 systemd[1]: var-lib-kubelet-pods-d5892ad3\x2db002\x2d4ad4\x2db05f\x2d81f696953085-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmr2c.mount: Deactivated successfully. Mar 25 01:20:33.743467 systemd[1]: var-lib-kubelet-pods-47896f08\x2d5b5f\x2d4441\x2db058\x2d69942e254e71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4h4nn.mount: Deactivated successfully. Mar 25 01:20:33.743530 systemd[1]: var-lib-kubelet-pods-47896f08\x2d5b5f\x2d4441\x2db058\x2d69942e254e71-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 01:20:33.743581 systemd[1]: var-lib-kubelet-pods-47896f08\x2d5b5f\x2d4441\x2db058\x2d69942e254e71-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 25 01:20:33.750140 containerd[1763]: time="2025-03-25T01:20:33.750097932Z" level=info msg="RemoveContainer for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" returns successfully"
Mar 25 01:20:33.750474 kubelet[3362]: I0325 01:20:33.750417 3362 scope.go:117] "RemoveContainer" containerID="f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238"
Mar 25 01:20:33.752086 containerd[1763]: time="2025-03-25T01:20:33.751997289Z" level=info msg="RemoveContainer for \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\""
Mar 25 01:20:33.765318 containerd[1763]: time="2025-03-25T01:20:33.765136507Z" level=info msg="RemoveContainer for \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" returns successfully"
Mar 25 01:20:33.766695 kubelet[3362]: I0325 01:20:33.766604 3362 scope.go:117] "RemoveContainer" containerID="708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8"
Mar 25 01:20:33.774039 containerd[1763]: time="2025-03-25T01:20:33.772794173Z" level=info msg="RemoveContainer for \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\""
Mar 25 01:20:33.784379 containerd[1763]: time="2025-03-25T01:20:33.784313674Z" level=info msg="RemoveContainer for \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" returns successfully"
Mar 25 01:20:33.784651 kubelet[3362]: I0325 01:20:33.784624 3362 scope.go:117] "RemoveContainer" containerID="f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4"
Mar 25 01:20:33.786578 containerd[1763]: time="2025-03-25T01:20:33.786531070Z" level=info msg="RemoveContainer for \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\""
Mar 25 01:20:33.796587 containerd[1763]: time="2025-03-25T01:20:33.796513693Z" level=info msg="RemoveContainer for \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" returns successfully"
Mar 25 01:20:33.796738 kubelet[3362]: I0325 01:20:33.796716 3362 scope.go:117] "RemoveContainer" containerID="4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1"
Mar 25 01:20:33.798329 containerd[1763]: time="2025-03-25T01:20:33.798129890Z" level=info msg="RemoveContainer for \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\""
Mar 25 01:20:33.809691 containerd[1763]: time="2025-03-25T01:20:33.809580110Z" level=info msg="RemoveContainer for \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" returns successfully"
Mar 25 01:20:33.810403 containerd[1763]: time="2025-03-25T01:20:33.810282069Z" level=error msg="ContainerStatus for \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\": not found"
Mar 25 01:20:33.810463 kubelet[3362]: I0325 01:20:33.809816 3362 scope.go:117] "RemoveContainer" containerID="4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4"
Mar 25 01:20:33.810463 kubelet[3362]: E0325 01:20:33.810427 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\": not found" containerID="4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4"
Mar 25 01:20:33.810565 kubelet[3362]: I0325 01:20:33.810474 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4"} err="failed to get container status \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4698d847b7dd0de85d24ecdba3cdb9b45392fc86e5beebe5b28d8a8c9be38ba4\": not found"
Mar 25 01:20:33.810565 kubelet[3362]: I0325 01:20:33.810561 3362 scope.go:117] "RemoveContainer" containerID="f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238"
Mar 25 01:20:33.810751 containerd[1763]: time="2025-03-25T01:20:33.810710668Z" level=error msg="ContainerStatus for \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\": not found"
Mar 25 01:20:33.810931 kubelet[3362]: E0325 01:20:33.810903 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\": not found" containerID="f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238"
Mar 25 01:20:33.810986 kubelet[3362]: I0325 01:20:33.810930 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238"} err="failed to get container status \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1b78d0789e028b49e8e3c3cd8e76074b8018942a1f773dd535785c49bb3a238\": not found"
Mar 25 01:20:33.810986 kubelet[3362]: I0325 01:20:33.810951 3362 scope.go:117] "RemoveContainer" containerID="708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8"
Mar 25 01:20:33.811196 containerd[1763]: time="2025-03-25T01:20:33.811162108Z" level=error msg="ContainerStatus for \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\": not found"
Mar 25 01:20:33.811299 kubelet[3362]: E0325 01:20:33.811260 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\": not found" containerID="708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8"
Mar 25 01:20:33.811299 kubelet[3362]: I0325 01:20:33.811286 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8"} err="failed to get container status \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"708c478a48feac091b9d769eee24c1a9f8a8bd1c1c9298527e0e5c9cdd1a91c8\": not found"
Mar 25 01:20:33.811717 kubelet[3362]: I0325 01:20:33.811302 3362 scope.go:117] "RemoveContainer" containerID="f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4"
Mar 25 01:20:33.811771 containerd[1763]: time="2025-03-25T01:20:33.811573987Z" level=error msg="ContainerStatus for \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\": not found"
Mar 25 01:20:33.811798 kubelet[3362]: E0325 01:20:33.811675 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\": not found" containerID="f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4"
Mar 25 01:20:33.811798 kubelet[3362]: I0325 01:20:33.811733 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4"} err="failed to get container status \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f79f985346cc67e25fe4472ca5d7bb8b8cd0b593ac801d6514f7474bb794a4a4\": not found"
Mar 25 01:20:33.811798 kubelet[3362]: I0325 01:20:33.811746 3362 scope.go:117] "RemoveContainer" containerID="4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1"
Mar 25 01:20:33.812102 kubelet[3362]: E0325 01:20:33.812024 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\": not found" containerID="4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1"
Mar 25 01:20:33.812143 containerd[1763]: time="2025-03-25T01:20:33.811901906Z" level=error msg="ContainerStatus for \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\": not found"
Mar 25 01:20:33.812168 kubelet[3362]: I0325 01:20:33.812108 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1"} err="failed to get container status \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ad7a6c0523b037ba252e787eb42dae748770122f0215f8863bb826c8a258ac1\": not found"
Mar 25 01:20:33.812168 kubelet[3362]: I0325 01:20:33.812124 3362 scope.go:117] "RemoveContainer" containerID="0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5"
Mar 25 01:20:33.814127 containerd[1763]: time="2025-03-25T01:20:33.814092823Z" level=info msg="RemoveContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\""
Mar 25 01:20:33.824323 containerd[1763]: time="2025-03-25T01:20:33.824283725Z" level=info msg="RemoveContainer for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" returns successfully"
Mar 25 01:20:33.824676 kubelet[3362]: I0325 01:20:33.824593 3362 scope.go:117] "RemoveContainer" containerID="0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5"
Mar 25 01:20:33.825095 containerd[1763]: time="2025-03-25T01:20:33.825000324Z" level=error msg="ContainerStatus for \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\": not found"
Mar 25 01:20:33.825193 kubelet[3362]: E0325 01:20:33.825146 3362 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\": not found" containerID="0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5"
Mar 25 01:20:33.825193 kubelet[3362]: I0325 01:20:33.825167 3362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5"} err="failed to get container status \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0560b7c270f99210e506c0ef5f855afb79de15862e7c615d2c6a8283281118a5\": not found"
Mar 25 01:20:34.338528 kubelet[3362]: I0325 01:20:34.337693 3362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47896f08-5b5f-4441-b058-69942e254e71" path="/var/lib/kubelet/pods/47896f08-5b5f-4441-b058-69942e254e71/volumes"
Mar 25 01:20:34.338528 kubelet[3362]: I0325 01:20:34.338232 3362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5892ad3-b002-4ad4-b05f-81f696953085" path="/var/lib/kubelet/pods/d5892ad3-b002-4ad4-b05f-81f696953085/volumes"
Mar 25 01:20:34.699315 sshd[4908]: Connection closed by 10.200.16.10 port 53036
Mar 25 01:20:34.700089 sshd-session[4906]: pam_unix(sshd:session): session closed for user core
Mar 25 01:20:34.705310 systemd[1]: sshd@22-10.200.20.47:22-10.200.16.10:53036.service: Deactivated successfully.
Mar 25 01:20:34.709242 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 01:20:34.710349 systemd[1]: session-25.scope: Consumed 1.275s CPU time, 23.6M memory peak.
Mar 25 01:20:34.714010 systemd-logind[1741]: Session 25 logged out. Waiting for processes to exit.
Mar 25 01:20:34.716595 systemd-logind[1741]: Removed session 25.
Mar 25 01:20:34.791987 systemd[1]: Started sshd@23-10.200.20.47:22-10.200.16.10:53046.service - OpenSSH per-connection server daemon (10.200.16.10:53046).
Mar 25 01:20:35.282661 sshd[5055]: Accepted publickey for core from 10.200.16.10 port 53046 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM
Mar 25 01:20:35.284423 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:20:35.289287 systemd-logind[1741]: New session 26 of user core.
Mar 25 01:20:35.292592 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 01:20:36.310192 kubelet[3362]: I0325 01:20:36.310049 3362 topology_manager.go:215] "Topology Admit Handler" podUID="a4e152c1-8c7f-4285-a306-5ae9b8e65b2c" podNamespace="kube-system" podName="cilium-49rqt"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310396 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="apply-sysctl-overwrites"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310412 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="mount-bpf-fs"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310418 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="clean-cilium-state"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310425 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="mount-cgroup"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310431 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="cilium-agent"
Mar 25 01:20:36.310942 kubelet[3362]: E0325 01:20:36.310436 3362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5892ad3-b002-4ad4-b05f-81f696953085" containerName="cilium-operator"
Mar 25 01:20:36.310942 kubelet[3362]: I0325 01:20:36.310573 3362 memory_manager.go:354] "RemoveStaleState removing state" podUID="47896f08-5b5f-4441-b058-69942e254e71" containerName="cilium-agent"
Mar 25 01:20:36.310942 kubelet[3362]: I0325 01:20:36.310580 3362 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5892ad3-b002-4ad4-b05f-81f696953085" containerName="cilium-operator"
Mar 25 01:20:36.321241 systemd[1]: Created slice kubepods-burstable-poda4e152c1_8c7f_4285_a306_5ae9b8e65b2c.slice - libcontainer container kubepods-burstable-poda4e152c1_8c7f_4285_a306_5ae9b8e65b2c.slice.
Mar 25 01:20:36.364480 kubelet[3362]: I0325 01:20:36.364372 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-etc-cni-netd\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364480 kubelet[3362]: I0325 01:20:36.364412 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-cilium-run\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364480 kubelet[3362]: I0325 01:20:36.364431 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-cni-path\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364480 kubelet[3362]: I0325 01:20:36.364460 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-bpf-maps\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364947 kubelet[3362]: I0325 01:20:36.364492 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-lib-modules\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364947 kubelet[3362]: I0325 01:20:36.364529 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-cilium-cgroup\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364947 kubelet[3362]: I0325 01:20:36.364552 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-clustermesh-secrets\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364947 kubelet[3362]: I0325 01:20:36.364570 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-cilium-config-path\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.364947 kubelet[3362]: I0325 01:20:36.364586 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-host-proc-sys-net\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364600 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-host-proc-sys-kernel\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364633 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-hostproc\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364664 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-xtables-lock\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364686 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-cilium-ipsec-secrets\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364706 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-hubble-tls\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.365107 kubelet[3362]: I0325 01:20:36.364727 3362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwb8r\" (UniqueName: \"kubernetes.io/projected/a4e152c1-8c7f-4285-a306-5ae9b8e65b2c-kube-api-access-pwb8r\") pod \"cilium-49rqt\" (UID: \"a4e152c1-8c7f-4285-a306-5ae9b8e65b2c\") " pod="kube-system/cilium-49rqt"
Mar 25 01:20:36.378769 sshd[5057]: Connection closed by 10.200.16.10 port 53046
Mar 25 01:20:36.379641 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
Mar 25 01:20:36.384721 systemd-logind[1741]: Session 26 logged out. Waiting for processes to exit.
Mar 25 01:20:36.385594 systemd[1]: sshd@23-10.200.20.47:22-10.200.16.10:53046.service: Deactivated successfully.
Mar 25 01:20:36.388093 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 01:20:36.389824 systemd-logind[1741]: Removed session 26.
Mar 25 01:20:36.464303 systemd[1]: Started sshd@24-10.200.20.47:22-10.200.16.10:53050.service - OpenSSH per-connection server daemon (10.200.16.10:53050).
Mar 25 01:20:36.627582 containerd[1763]: time="2025-03-25T01:20:36.627433882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49rqt,Uid:a4e152c1-8c7f-4285-a306-5ae9b8e65b2c,Namespace:kube-system,Attempt:0,}"
Mar 25 01:20:36.687953 containerd[1763]: time="2025-03-25T01:20:36.687867772Z" level=info msg="connecting to shim 9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:20:36.710654 systemd[1]: Started cri-containerd-9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2.scope - libcontainer container 9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2.
Mar 25 01:20:36.753618 containerd[1763]: time="2025-03-25T01:20:36.753572249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49rqt,Uid:a4e152c1-8c7f-4285-a306-5ae9b8e65b2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\""
Mar 25 01:20:36.759600 containerd[1763]: time="2025-03-25T01:20:36.759558434Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:20:36.795300 containerd[1763]: time="2025-03-25T01:20:36.794914746Z" level=info msg="Container 58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:20:36.814301 containerd[1763]: time="2025-03-25T01:20:36.814256418Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\""
Mar 25 01:20:36.815105 containerd[1763]: time="2025-03-25T01:20:36.815073376Z" level=info msg="StartContainer for \"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\""
Mar 25 01:20:36.815986 containerd[1763]: time="2025-03-25T01:20:36.815949654Z" level=info msg="connecting to shim 58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" protocol=ttrpc version=3
Mar 25 01:20:36.837718 systemd[1]: Started cri-containerd-58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad.scope - libcontainer container 58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad.
Mar 25 01:20:36.873776 containerd[1763]: time="2025-03-25T01:20:36.873730270Z" level=info msg="StartContainer for \"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\" returns successfully"
Mar 25 01:20:36.877787 systemd[1]: cri-containerd-58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad.scope: Deactivated successfully.
Mar 25 01:20:36.880099 containerd[1763]: time="2025-03-25T01:20:36.879911255Z" level=info msg="received exit event container_id:\"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\" id:\"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\" pid:5131 exited_at:{seconds:1742865636 nanos:879590816}"
Mar 25 01:20:36.880466 containerd[1763]: time="2025-03-25T01:20:36.880420894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\" id:\"58b9bd3ac86995b20474cc14ff3313e93e77be79dc972b146941d0636049faad\" pid:5131 exited_at:{seconds:1742865636 nanos:879590816}"
Mar 25 01:20:36.936521 sshd[5068]: Accepted publickey for core from 10.200.16.10 port 53050 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM
Mar 25 01:20:36.959029 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:20:36.964902 systemd-logind[1741]: New session 27 of user core.
Mar 25 01:20:36.970611 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 01:20:37.271153 sshd[5165]: Connection closed by 10.200.16.10 port 53050
Mar 25 01:20:37.270971 sshd-session[5068]: pam_unix(sshd:session): session closed for user core
Mar 25 01:20:37.275221 systemd[1]: sshd@24-10.200.20.47:22-10.200.16.10:53050.service: Deactivated successfully.
Mar 25 01:20:37.277550 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 01:20:37.278664 systemd-logind[1741]: Session 27 logged out. Waiting for processes to exit.
Mar 25 01:20:37.279592 systemd-logind[1741]: Removed session 27.
Mar 25 01:20:37.355646 systemd[1]: Started sshd@25-10.200.20.47:22-10.200.16.10:53054.service - OpenSSH per-connection server daemon (10.200.16.10:53054).
Mar 25 01:20:37.745351 containerd[1763]: time="2025-03-25T01:20:37.745296787Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:20:37.789960 containerd[1763]: time="2025-03-25T01:20:37.788628839Z" level=info msg="Container 2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:20:37.811547 sshd[5172]: Accepted publickey for core from 10.200.16.10 port 53054 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM
Mar 25 01:20:37.813283 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:20:37.816428 containerd[1763]: time="2025-03-25T01:20:37.816387211Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\""
Mar 25 01:20:37.817352 containerd[1763]: time="2025-03-25T01:20:37.817182529Z" level=info msg="StartContainer for \"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\""
Mar 25 01:20:37.819007 containerd[1763]: time="2025-03-25T01:20:37.818772845Z" level=info msg="connecting to shim 2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" protocol=ttrpc version=3
Mar 25 01:20:37.827229 systemd-logind[1741]: New session 28 of user core.
Mar 25 01:20:37.836150 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 25 01:20:37.851660 systemd[1]: Started cri-containerd-2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7.scope - libcontainer container 2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7.
Mar 25 01:20:37.890239 containerd[1763]: time="2025-03-25T01:20:37.890136027Z" level=info msg="StartContainer for \"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\" returns successfully"
Mar 25 01:20:37.892263 systemd[1]: cri-containerd-2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7.scope: Deactivated successfully.
Mar 25 01:20:37.895348 containerd[1763]: time="2025-03-25T01:20:37.894996775Z" level=info msg="received exit event container_id:\"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\" id:\"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\" pid:5189 exited_at:{seconds:1742865637 nanos:894541617}"
Mar 25 01:20:37.895548 containerd[1763]: time="2025-03-25T01:20:37.895303855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\" id:\"2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7\" pid:5189 exited_at:{seconds:1742865637 nanos:894541617}"
Mar 25 01:20:37.915095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2006f12db3fad3e3b38fe6b66d767616b32fca16d79fa203a26e4c8c393aebc7-rootfs.mount: Deactivated successfully.
Mar 25 01:20:38.445466 kubelet[3362]: E0325 01:20:38.445415 3362 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 25 01:20:38.748494 containerd[1763]: time="2025-03-25T01:20:38.748291857Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 25 01:20:38.796520 containerd[1763]: time="2025-03-25T01:20:38.796401218Z" level=info msg="Container 0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:20:38.796852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255758917.mount: Deactivated successfully.
Mar 25 01:20:38.822664 containerd[1763]: time="2025-03-25T01:20:38.822528433Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\""
Mar 25 01:20:38.825154 containerd[1763]: time="2025-03-25T01:20:38.823181552Z" level=info msg="StartContainer for \"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\""
Mar 25 01:20:38.825154 containerd[1763]: time="2025-03-25T01:20:38.824611668Z" level=info msg="connecting to shim 0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" protocol=ttrpc version=3
Mar 25 01:20:38.847656 systemd[1]: Started cri-containerd-0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac.scope - libcontainer container 0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac.
Mar 25 01:20:38.884998 systemd[1]: cri-containerd-0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac.scope: Deactivated successfully.
Mar 25 01:20:38.887403 containerd[1763]: time="2025-03-25T01:20:38.887355152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\" id:\"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\" pid:5241 exited_at:{seconds:1742865638 nanos:886861914}"
Mar 25 01:20:38.887878 containerd[1763]: time="2025-03-25T01:20:38.887845151Z" level=info msg="received exit event container_id:\"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\" id:\"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\" pid:5241 exited_at:{seconds:1742865638 nanos:886861914}"
Mar 25 01:20:38.890555 containerd[1763]: time="2025-03-25T01:20:38.890385385Z" level=info msg="StartContainer for \"0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac\" returns successfully"
Mar 25 01:20:38.912169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a0b0b5bd842c822cf41e1e9c39e3d9e4a2b47e7a6f27f9d0d25a20ab1afe0ac-rootfs.mount: Deactivated successfully.
Mar 25 01:20:39.755292 containerd[1763]: time="2025-03-25T01:20:39.755210758Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 25 01:20:39.784955 containerd[1763]: time="2025-03-25T01:20:39.784680325Z" level=info msg="Container 9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:20:39.802816 containerd[1763]: time="2025-03-25T01:20:39.802773160Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\""
Mar 25 01:20:39.803674 containerd[1763]: time="2025-03-25T01:20:39.803291199Z" level=info msg="StartContainer for \"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\""
Mar 25 01:20:39.804951 containerd[1763]: time="2025-03-25T01:20:39.804897475Z" level=info msg="connecting to shim 9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" protocol=ttrpc version=3
Mar 25 01:20:39.826657 systemd[1]: Started cri-containerd-9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a.scope - libcontainer container 9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a.
Mar 25 01:20:39.858904 systemd[1]: cri-containerd-9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a.scope: Deactivated successfully.
Mar 25 01:20:39.859354 containerd[1763]: time="2025-03-25T01:20:39.858985941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\" id:\"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\" pid:5279 exited_at:{seconds:1742865639 nanos:858723461}"
Mar 25 01:20:39.864173 containerd[1763]: time="2025-03-25T01:20:39.864121768Z" level=info msg="received exit event container_id:\"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\" id:\"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\" pid:5279 exited_at:{seconds:1742865639 nanos:858723461}"
Mar 25 01:20:39.874981 containerd[1763]: time="2025-03-25T01:20:39.874755182Z" level=info msg="StartContainer for \"9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a\" returns successfully"
Mar 25 01:20:39.887091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9552d8a9052f691d2e7608d96ba97b91bfe3a7cc8cc6d551335fee019ae9e47a-rootfs.mount: Deactivated successfully.
Mar 25 01:20:40.759227 containerd[1763]: time="2025-03-25T01:20:40.759181906Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:20:40.803362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260162939.mount: Deactivated successfully.
Mar 25 01:20:40.806202 containerd[1763]: time="2025-03-25T01:20:40.804937713Z" level=info msg="Container 19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:20:40.831862 containerd[1763]: time="2025-03-25T01:20:40.831815326Z" level=info msg="CreateContainer within sandbox \"9eb5c9fc2278777cd323bcbae6f42a9bac747889d4c484fd42f2847a1838b2f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\""
Mar 25 01:20:40.832560 containerd[1763]: time="2025-03-25T01:20:40.832332085Z" level=info msg="StartContainer for \"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\""
Mar 25 01:20:40.834165 containerd[1763]: time="2025-03-25T01:20:40.834085840Z" level=info msg="connecting to shim 19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0" address="unix:///run/containerd/s/6ad01e33affb5f42de54ed0f2b992b4ca7ed59b1c8ddc38e8ba887788aedc6ee" protocol=ttrpc version=3
Mar 25 01:20:40.856666 systemd[1]: Started cri-containerd-19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0.scope - libcontainer container 19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0.
Mar 25 01:20:40.892846 containerd[1763]: time="2025-03-25T01:20:40.892756055Z" level=info msg="StartContainer for \"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" returns successfully"
Mar 25 01:20:40.943542 containerd[1763]: time="2025-03-25T01:20:40.942933530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"4377feef2d0b9846d64129d0e4ebe5cf0630d691c305f3a18d410dc9cddddea7\" pid:5344 exited_at:{seconds:1742865640 nanos:942660571}"
Mar 25 01:20:41.468500 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 25 01:20:42.094474 kubelet[3362]: I0325 01:20:42.094189 3362 setters.go:580] "Node became not ready" node="ci-4284.0.0-a-be6d65597e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:20:42Z","lastTransitionTime":"2025-03-25T01:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 25 01:20:42.255976 containerd[1763]: time="2025-03-25T01:20:42.255936751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"472b308298dc3cd1f48784f18e77a944d4aa8f5e9285dc1eb69393b6acb8fa5f\" pid:5424 exit_status:1 exited_at:{seconds:1742865642 nanos:255183113}"
Mar 25 01:20:44.146639 systemd-networkd[1487]: lxc_health: Link UP
Mar 25 01:20:44.156643 systemd-networkd[1487]: lxc_health: Gained carrier
Mar 25 01:20:44.434843 containerd[1763]: time="2025-03-25T01:20:44.434792234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"f60150251f8a87e1024617113e9daf876da33819a67e68c77d3b8909e2d92c15\" pid:5864 exited_at:{seconds:1742865644 nanos:433960296}"
Mar 25 01:20:44.651502 kubelet[3362]: I0325 01:20:44.651107 3362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-49rqt" podStartSLOduration=8.651090112 podStartE2EDuration="8.651090112s" podCreationTimestamp="2025-03-25 01:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:20:41.784689481 +0000 UTC m=+163.541911008" watchObservedRunningTime="2025-03-25 01:20:44.651090112 +0000 UTC m=+166.408311679"
Mar 25 01:20:45.770638 systemd-networkd[1487]: lxc_health: Gained IPv6LL
Mar 25 01:20:46.576493 containerd[1763]: time="2025-03-25T01:20:46.576117072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"ba334966f5975cbf7a90702c2287c6b0a566677f99f23da8eb8f76254056bd31\" pid:5904 exited_at:{seconds:1742865646 nanos:575809594}"
Mar 25 01:20:48.692173 containerd[1763]: time="2025-03-25T01:20:48.692116372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"32f75fa2c5bfc255257a322f62b493d321b02939a2dd9add83aee55cdd64d465\" pid:5935 exited_at:{seconds:1742865648 nanos:691658454}"
Mar 25 01:20:50.818395 containerd[1763]: time="2025-03-25T01:20:50.818321415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19ab5f289ee3a02b41b0b00609ea048b9df8a14d4f848fc66bdd3cb03ed7aeb0\" id:\"d99fa039f232933e18d2852b935d93a83a67f1f62248ddc951e60edffbc990b4\" pid:5959 exited_at:{seconds:1742865650 nanos:817692819}"
Mar 25 01:20:50.906879 sshd[5184]: Connection closed by 10.200.16.10 port 53054
Mar 25 01:20:50.906693 sshd-session[5172]: pam_unix(sshd:session): session closed for user core
Mar 25 01:20:50.910524 systemd-logind[1741]: Session 28 logged out. Waiting for processes to exit.
Mar 25 01:20:50.911125 systemd[1]: sshd@25-10.200.20.47:22-10.200.16.10:53054.service: Deactivated successfully.
Mar 25 01:20:50.913759 systemd[1]: session-28.scope: Deactivated successfully.
Mar 25 01:20:50.915194 systemd-logind[1741]: Removed session 28.