Jan 29 16:07:01.336309 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:07:01.336330 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:07:01.336338 kernel: KASLR enabled
Jan 29 16:07:01.336343 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 29 16:07:01.336350 kernel: printk: bootconsole [pl11] enabled
Jan 29 16:07:01.336356 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:07:01.336363 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 29 16:07:01.336368 kernel: random: crng init done
Jan 29 16:07:01.336374 kernel: secureboot: Secure boot disabled
Jan 29 16:07:01.336380 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:07:01.336385 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 29 16:07:01.336391 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336397 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336405 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 29 16:07:01.336412 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336418 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336424 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336431 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336438 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336444 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336450 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 29 16:07:01.336456 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.336462 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 29 16:07:01.336468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 29 16:07:01.336474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 29 16:07:01.336480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 29 16:07:01.336486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 29 16:07:01.336492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 29 16:07:01.336500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 29 16:07:01.336506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 29 16:07:01.336512 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 29 16:07:01.336518 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 29 16:07:01.336524 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 29 16:07:01.336530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 29 16:07:01.336537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 29 16:07:01.336543 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 29 16:07:01.336549 kernel: Zone ranges:
Jan 29 16:07:01.336555 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Jan 29 16:07:01.336561 kernel:   DMA32    empty
Jan 29 16:07:01.336567 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 16:07:01.336577 kernel: Movable zone start for each node
Jan 29 16:07:01.336583 kernel: Early memory node ranges
Jan 29 16:07:01.336590 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 29 16:07:01.336596 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 29 16:07:01.336603 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 29 16:07:01.336611 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 29 16:07:01.336618 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 29 16:07:01.336624 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 29 16:07:01.336631 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 29 16:07:01.336637 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 29 16:07:01.336643 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 16:07:01.336650 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 29 16:07:01.336656 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 29 16:07:01.336663 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:07:01.336669 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:07:01.336676 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:07:01.336682 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 29 16:07:01.336690 kernel: psci: SMC Calling Convention v1.4
Jan 29 16:07:01.336696 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 29 16:07:01.336703 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 29 16:07:01.336709 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:07:01.338807 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:07:01.338829 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 16:07:01.338837 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:07:01.338844 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:07:01.338851 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:07:01.338857 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:07:01.338864 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:07:01.338876 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:07:01.338883 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:07:01.338890 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 29 16:07:01.338896 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:07:01.338903 kernel: alternatives: applying boot alternatives
Jan 29 16:07:01.338911 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:01.338918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:07:01.338925 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:07:01.338932 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:07:01.338938 kernel: Fallback order for Node 0: 0
Jan 29 16:07:01.338944 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Jan 29 16:07:01.338952 kernel: Policy zone: Normal
Jan 29 16:07:01.338959 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:07:01.338965 kernel: software IO TLB: area num 2.
Jan 29 16:07:01.338972 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Jan 29 16:07:01.338978 kernel: Memory: 3983652K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210508K reserved, 0K cma-reserved)
Jan 29 16:07:01.338985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:07:01.338991 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:07:01.338998 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:07:01.339005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:07:01.339012 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:07:01.339018 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:07:01.339026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:07:01.339033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:07:01.339040 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:07:01.339046 kernel: GICv3: 960 SPIs implemented
Jan 29 16:07:01.339053 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:07:01.339059 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:07:01.339065 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:07:01.339072 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 29 16:07:01.339078 kernel: ITS: No ITS available, not enabling LPIs
Jan 29 16:07:01.339085 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:07:01.339092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:01.339098 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:07:01.339107 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:07:01.339113 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:07:01.339120 kernel: Console: colour dummy device 80x25
Jan 29 16:07:01.339127 kernel: printk: console [tty1] enabled
Jan 29 16:07:01.339134 kernel: ACPI: Core revision 20230628
Jan 29 16:07:01.339141 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:07:01.339148 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:07:01.339154 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:07:01.339161 kernel: landlock: Up and running.
Jan 29 16:07:01.339169 kernel: SELinux:  Initializing.
Jan 29 16:07:01.339176 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.339183 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.339189 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:01.339197 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:01.339203 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 29 16:07:01.339210 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 29 16:07:01.339223 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 16:07:01.339230 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:07:01.339237 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 29 16:07:01.339244 kernel: Remapping and enabling EFI services.
Jan 29 16:07:01.339251 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:07:01.339260 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:07:01.339267 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 29 16:07:01.339274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:01.339281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:07:01.339288 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:07:01.339296 kernel: SMP: Total of 2 processors activated.
Jan 29 16:07:01.339304 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:07:01.339311 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 29 16:07:01.339318 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:07:01.339325 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:07:01.339332 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:07:01.339339 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:07:01.339346 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:07:01.339353 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:07:01.339361 kernel: alternatives: applying system-wide alternatives
Jan 29 16:07:01.339368 kernel: devtmpfs: initialized
Jan 29 16:07:01.339375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:07:01.339383 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:07:01.339390 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:07:01.339396 kernel: SMBIOS 3.1.0 present.
Jan 29 16:07:01.339404 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 29 16:07:01.339411 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:07:01.339432 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:07:01.339450 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:07:01.339457 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:07:01.339465 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:07:01.339475 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 29 16:07:01.339483 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:07:01.339493 kernel: cpuidle: using governor menu
Jan 29 16:07:01.339500 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:07:01.339507 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:07:01.339514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:07:01.339523 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:07:01.339530 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:07:01.339537 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:07:01.339544 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:07:01.339551 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:07:01.339558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:07:01.339565 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:07:01.339572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:07:01.339579 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:07:01.339588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:07:01.339595 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:07:01.339602 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:07:01.339609 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:07:01.339616 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:07:01.339623 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:07:01.339630 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:07:01.339637 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:07:01.339644 kernel: ACPI: Interpreter enabled
Jan 29 16:07:01.339658 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:07:01.339667 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:07:01.339674 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:07:01.339681 kernel: printk: bootconsole [pl11] disabled
Jan 29 16:07:01.339689 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 29 16:07:01.339695 kernel: iommu: Default domain type: Translated
Jan 29 16:07:01.339703 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:07:01.339714 kernel: efivars: Registered efivars operations
Jan 29 16:07:01.339732 kernel: vgaarb: loaded
Jan 29 16:07:01.339743 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:07:01.339750 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:07:01.339757 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:07:01.339764 kernel: pnp: PnP ACPI init
Jan 29 16:07:01.339771 kernel: pnp: PnP ACPI: found 0 devices
Jan 29 16:07:01.339778 kernel: NET: Registered PF_INET protocol family
Jan 29 16:07:01.339785 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:07:01.339793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:07:01.339800 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:07:01.339808 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:07:01.339816 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:07:01.339823 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:07:01.339830 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.339837 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.339844 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:07:01.339851 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:07:01.339859 kernel: kvm [1]: HYP mode not available
Jan 29 16:07:01.339866 kernel: Initialise system trusted keyrings
Jan 29 16:07:01.339874 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:07:01.339881 kernel: Key type asymmetric registered
Jan 29 16:07:01.339888 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:07:01.339895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:07:01.339902 kernel: io scheduler mq-deadline registered
Jan 29 16:07:01.339909 kernel: io scheduler kyber registered
Jan 29 16:07:01.339916 kernel: io scheduler bfq registered
Jan 29 16:07:01.339923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:07:01.339930 kernel: thunder_xcv, ver 1.0
Jan 29 16:07:01.339938 kernel: thunder_bgx, ver 1.0
Jan 29 16:07:01.339945 kernel: nicpf, ver 1.0
Jan 29 16:07:01.339952 kernel: nicvf, ver 1.0
Jan 29 16:07:01.340089 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 16:07:01.340158 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:07:00 UTC (1738166820)
Jan 29 16:07:01.340168 kernel: efifb: probing for efifb
Jan 29 16:07:01.340175 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 16:07:01.340183 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 16:07:01.340192 kernel: efifb: scrolling: redraw
Jan 29 16:07:01.340199 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 16:07:01.340206 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:07:01.340213 kernel: fb0: EFI VGA frame buffer device
Jan 29 16:07:01.340220 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 29 16:07:01.340227 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:07:01.340234 kernel: No ACPI PMU IRQ for CPU0
Jan 29 16:07:01.340241 kernel: No ACPI PMU IRQ for CPU1
Jan 29 16:07:01.340248 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 29 16:07:01.340257 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 16:07:01.340264 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 16:07:01.340271 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:07:01.340278 kernel: Segment Routing with IPv6
Jan 29 16:07:01.340285 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:07:01.340292 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:07:01.340298 kernel: Key type dns_resolver registered
Jan 29 16:07:01.340305 kernel: registered taskstats version 1
Jan 29 16:07:01.340313 kernel: Loading compiled-in X.509 certificates
Jan 29 16:07:01.340321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b'
Jan 29 16:07:01.340328 kernel: Key type .fscrypt registered
Jan 29 16:07:01.340335 kernel: Key type fscrypt-provisioning registered
Jan 29 16:07:01.340342 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:07:01.340350 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:07:01.340357 kernel: ima: No architecture policies found
Jan 29 16:07:01.340364 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 16:07:01.340371 kernel: clk: Disabling unused clocks
Jan 29 16:07:01.340378 kernel: Freeing unused kernel memory: 38336K
Jan 29 16:07:01.340386 kernel: Run /init as init process
Jan 29 16:07:01.340393 kernel:   with arguments:
Jan 29 16:07:01.340400 kernel:     /init
Jan 29 16:07:01.340407 kernel:   with environment:
Jan 29 16:07:01.340414 kernel:     HOME=/
Jan 29 16:07:01.340421 kernel:     TERM=linux
Jan 29 16:07:01.340428 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:07:01.340436 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:07:01.340448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:07:01.340456 systemd[1]: Detected virtualization microsoft.
Jan 29 16:07:01.340463 systemd[1]: Detected architecture arm64.
Jan 29 16:07:01.340470 systemd[1]: Running in initrd.
Jan 29 16:07:01.340477 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:07:01.340485 systemd[1]: Hostname set to .
Jan 29 16:07:01.340493 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:07:01.340500 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:07:01.340509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:01.340517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:01.340525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:07:01.340533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:07:01.340540 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:07:01.340549 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:07:01.340557 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:07:01.340567 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:07:01.340574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:01.340582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:01.340589 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:07:01.340597 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:07:01.340604 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:07:01.340612 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:07:01.340639 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:01.340653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:01.340660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:07:01.340668 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:07:01.340676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:01.340683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:01.340692 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:01.340699 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:07:01.340707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:07:01.340714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:07:01.348139 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:07:01.348148 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:07:01.348156 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:07:01.348164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:07:01.348200 systemd-journald[218]: Collecting audit messages is disabled.
Jan 29 16:07:01.348222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:01.348231 systemd-journald[218]: Journal started
Jan 29 16:07:01.348249 systemd-journald[218]: Runtime Journal (/run/log/journal/01ac91b23d2c416ca200beb7834468d5) is 8M, max 78.5M, 70.5M free.
Jan 29 16:07:01.359769 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:07:01.339697 systemd-modules-load[220]: Inserted module 'overlay'
Jan 29 16:07:01.374259 kernel: Bridge firewalling registered
Jan 29 16:07:01.374284 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:07:01.368694 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jan 29 16:07:01.388052 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:01.402673 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:01.411867 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:07:01.417703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:01.432188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:01.461033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:01.470887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:07:01.493710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:07:01.513049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:07:01.524093 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:01.532971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:01.544031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:01.561124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:01.592887 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:07:01.617313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:01.637112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:07:01.653834 dracut-cmdline[253]: dracut-dracut-053
Jan 29 16:07:01.660312 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:01.661776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:01.735310 systemd-resolved[257]: Positive Trust Anchors:
Jan 29 16:07:01.735874 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:07:01.735907 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:07:01.738038 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 29 16:07:01.740108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:01.747375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:01.865753 kernel: SCSI subsystem initialized
Jan 29 16:07:01.873743 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:07:01.887976 kernel: iscsi: registered transport (tcp)
Jan 29 16:07:01.905684 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:07:01.905758 kernel: QLogic iSCSI HBA Driver
Jan 29 16:07:01.943247 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:01.958984 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:07:01.994737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:07:01.994819 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:07:01.994832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:07:02.049744 kernel: raid6: neonx8   gen() 15770 MB/s
Jan 29 16:07:02.069750 kernel: raid6: neonx4   gen() 15827 MB/s
Jan 29 16:07:02.089727 kernel: raid6: neonx2   gen() 13223 MB/s
Jan 29 16:07:02.110729 kernel: raid6: neonx1   gen() 10539 MB/s
Jan 29 16:07:02.130727 kernel: raid6: int64x8  gen()  6795 MB/s
Jan 29 16:07:02.150727 kernel: raid6: int64x4  gen()  7359 MB/s
Jan 29 16:07:02.171727 kernel: raid6: int64x2  gen()  6114 MB/s
Jan 29 16:07:02.195737 kernel: raid6: int64x1  gen()  5061 MB/s
Jan 29 16:07:02.195758 kernel: raid6: using algorithm neonx4 gen() 15827 MB/s
Jan 29 16:07:02.220527 kernel: raid6: .... xor() 12512 MB/s, rmw enabled
Jan 29 16:07:02.220539 kernel: raid6: using neon recovery algorithm
Jan 29 16:07:02.231614 kernel: xor: measuring software checksum speed
Jan 29 16:07:02.231628 kernel:    8regs           : 21658 MB/sec
Jan 29 16:07:02.235501 kernel:    32regs          : 21653 MB/sec
Jan 29 16:07:02.243479 kernel:    arm64_neon      : 26196 MB/sec
Jan 29 16:07:02.243490 kernel: xor: using function: arm64_neon (26196 MB/sec)
Jan 29 16:07:02.293732 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:07:02.303061 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:02.320865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:02.346948 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Jan 29 16:07:02.352299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:02.373852 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:07:02.405985 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation
Jan 29 16:07:02.432578 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:07:02.447903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:07:02.486303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:02.505923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:07:02.522461 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:07:02.535466 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:07:02.558544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:02.572033 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:07:02.591216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:07:02.618707 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:07:02.637761 kernel: hv_vmbus: Vmbus version:5.3
Jan 29 16:07:02.638129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:02.638294 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:02.673610 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 16:07:02.673635 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 29 16:07:02.673656 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:02.704186 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 16:07:02.704208 kernel: scsi host0: storvsc_host_t
Jan 29 16:07:02.704258 kernel: scsi host1: storvsc_host_t
Jan 29 16:07:02.689155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:02.728105 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 16:07:02.728128 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 16:07:02.728138 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 16:07:02.689379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:02.742733 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 16:07:02.742788 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 16:07:02.749773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:02.785845 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 29 16:07:02.785879 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 16:07:02.779108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:02.804836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:02.819766 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 16:07:02.814899 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:02.851833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:02.889432 kernel: PTP clock support registered
Jan 29 16:07:02.889455 kernel: hv_netvsc 002248b8-e43e-0022-48b8-e43e002248b8 eth0: VF slot 1 added
Jan 29 16:07:02.889613 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 16:07:03.110399 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 16:07:03.110415 kernel: hv_vmbus: registering driver hv_utils
Jan 29 16:07:03.110431 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:07:03.110440 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 16:07:03.110449 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 16:07:03.110458 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 16:07:03.110466 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 16:07:03.105332 systemd-resolved[257]: Clock change detected. Flushing caches.
Jan 29 16:07:03.171432 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 16:07:03.200048 kernel: hv_vmbus: registering driver hv_pci
Jan 29 16:07:03.200064 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 16:07:03.200193 kernel: hv_pci 1787caf6-7497-48c7-8e95-6f1df6bfa416: PCI VMBus probing: Using version 0x10004
Jan 29 16:07:03.240010 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:07:03.240161 kernel: hv_pci 1787caf6-7497-48c7-8e95-6f1df6bfa416: PCI host bridge to bus 7497:00
Jan 29 16:07:03.240247 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 16:07:03.240329 kernel: pci_bus 7497:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 29 16:07:03.240426 kernel: pci_bus 7497:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 16:07:03.240499 kernel: pci 7497:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 29 16:07:03.240591 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 16:07:03.240675 kernel: pci 7497:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 16:07:03.240757 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:03.240767 kernel: pci 7497:00:02.0: enabling Extended Tags
Jan 29 16:07:03.240846 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:07:03.240927 kernel: pci 7497:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7497:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 29 16:07:03.241009 kernel: pci_bus 7497:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 16:07:03.241094 kernel: pci 7497:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 16:07:03.279564 kernel: mlx5_core 7497:00:02.0: enabling device (0000 -> 0002)
Jan 29 16:07:03.506341 kernel: mlx5_core 7497:00:02.0: firmware version: 16.30.1284
Jan 29 16:07:03.506492 kernel: hv_netvsc 002248b8-e43e-0022-48b8-e43e002248b8 eth0: VF registering: eth1
Jan 29 16:07:03.506601 kernel: mlx5_core 7497:00:02.0 eth1: joined to eth0
Jan 29 16:07:03.506692 kernel: mlx5_core 7497:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 29 16:07:03.517129 kernel: mlx5_core 7497:00:02.0 enP29847s1: renamed from eth1
Jan 29 16:07:03.678197 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 16:07:03.768966 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 16:07:03.785723 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (488)
Jan 29 16:07:03.791227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 16:07:03.813751 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (489)
Jan 29 16:07:03.832140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 16:07:03.848077 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 16:07:03.870238 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:07:03.897183 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:03.912116 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:04.923111 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:07:04.924166 disk-uuid[605]: The operation has completed successfully.
Jan 29 16:07:04.993173 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:07:04.995340 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:07:05.036274 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:07:05.049079 sh[691]: Success
Jan 29 16:07:05.077125 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 16:07:05.289886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:07:05.297135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:07:05.317231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:07:05.353238 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a
Jan 29 16:07:05.353300 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:05.360391 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:07:05.365634 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:07:05.370394 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:07:05.586873 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:07:05.592243 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:07:05.607283 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:07:05.615299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:07:05.666163 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:05.666219 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:05.666230 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:05.686114 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:05.702703 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:07:05.709104 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:05.716346 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:07:05.734245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:07:05.740630 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:07:05.758318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:07:05.797144 systemd-networkd[876]: lo: Link UP
Jan 29 16:07:05.797152 systemd-networkd[876]: lo: Gained carrier
Jan 29 16:07:05.798958 systemd-networkd[876]: Enumeration completed
Jan 29 16:07:05.806146 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:07:05.806708 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:05.806712 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:05.815883 systemd[1]: Reached target network.target - Network.
Jan 29 16:07:05.896108 kernel: mlx5_core 7497:00:02.0 enP29847s1: Link up
Jan 29 16:07:05.940035 systemd-networkd[876]: enP29847s1: Link UP
Jan 29 16:07:05.943922 kernel: hv_netvsc 002248b8-e43e-0022-48b8-e43e002248b8 eth0: Data path switched to VF: enP29847s1
Jan 29 16:07:05.940144 systemd-networkd[876]: eth0: Link UP
Jan 29 16:07:05.940244 systemd-networkd[876]: eth0: Gained carrier
Jan 29 16:07:05.940253 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:05.967700 systemd-networkd[876]: enP29847s1: Gained carrier
Jan 29 16:07:05.983119 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 16:07:06.542929 ignition[872]: Ignition 2.20.0
Jan 29 16:07:06.542939 ignition[872]: Stage: fetch-offline
Jan 29 16:07:06.547896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:07:06.542973 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:06.542981 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:06.543070 ignition[872]: parsed url from cmdline: ""
Jan 29 16:07:06.543073 ignition[872]: no config URL provided
Jan 29 16:07:06.543078 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:07:06.574232 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:07:06.545768 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:07:06.545782 ignition[872]: failed to fetch config: resource requires networking
Jan 29 16:07:06.546172 ignition[872]: Ignition finished successfully
Jan 29 16:07:06.598860 ignition[886]: Ignition 2.20.0
Jan 29 16:07:06.598867 ignition[886]: Stage: fetch
Jan 29 16:07:06.599170 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:06.599181 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:06.599289 ignition[886]: parsed url from cmdline: ""
Jan 29 16:07:06.599293 ignition[886]: no config URL provided
Jan 29 16:07:06.599298 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:07:06.599309 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:07:06.599338 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 16:07:06.779027 ignition[886]: GET result: OK
Jan 29 16:07:06.779119 ignition[886]: config has been read from IMDS userdata
Jan 29 16:07:06.779158 ignition[886]: parsing config with SHA512: 4a79bbee100db8141c1fc1b0eba229d3257fe74f7b5818bc2b3ccd8f89e67f68b9ff64ee48dfcc73bab19b56f4b95cd96b8f03d8dc885b08a523848d752986d6
Jan 29 16:07:06.783123 unknown[886]: fetched base config from "system"
Jan 29 16:07:06.783485 ignition[886]: fetch: fetch complete
Jan 29 16:07:06.783131 unknown[886]: fetched base config from "system"
Jan 29 16:07:06.783491 ignition[886]: fetch: fetch passed
Jan 29 16:07:06.783136 unknown[886]: fetched user config from "azure"
Jan 29 16:07:06.783530 ignition[886]: Ignition finished successfully
Jan 29 16:07:06.794603 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:07:06.815802 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:07:06.844337 ignition[892]: Ignition 2.20.0
Jan 29 16:07:06.844345 ignition[892]: Stage: kargs
Jan 29 16:07:06.851356 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:07:06.844534 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:06.844544 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:06.845454 ignition[892]: kargs: kargs passed
Jan 29 16:07:06.878626 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:07:06.845499 ignition[892]: Ignition finished successfully
Jan 29 16:07:06.892727 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:07:06.890489 ignition[898]: Ignition 2.20.0
Jan 29 16:07:06.901480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:07:06.890496 ignition[898]: Stage: disks
Jan 29 16:07:06.913317 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:07:06.890697 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:06.926191 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:07:06.890717 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:06.935301 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:07:06.891718 ignition[898]: disks: disks passed
Jan 29 16:07:06.948249 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:07:06.891767 ignition[898]: Ignition finished successfully
Jan 29 16:07:06.975307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:07:07.062683 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 16:07:07.072799 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:07:07.101270 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:07:07.164294 kernel: EXT4-fs (sda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none.
Jan 29 16:07:07.164769 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:07:07.173792 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:07:07.212171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:07:07.219218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:07:07.230249 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:07:07.270530 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917)
Jan 29 16:07:07.270562 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:07.270573 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:07.247472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:07:07.294263 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:07.247512 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:07:07.282720 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:07:07.312339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:07:07.331108 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:07.331712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:07:07.409308 systemd-networkd[876]: enP29847s1: Gained IPv6LL
Jan 29 16:07:07.473271 systemd-networkd[876]: eth0: Gained IPv6LL
Jan 29 16:07:07.754935 coreos-metadata[919]: Jan 29 16:07:07.754 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 16:07:07.764757 coreos-metadata[919]: Jan 29 16:07:07.764 INFO Fetch successful
Jan 29 16:07:07.770276 coreos-metadata[919]: Jan 29 16:07:07.770 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 16:07:07.796076 coreos-metadata[919]: Jan 29 16:07:07.795 INFO Fetch successful
Jan 29 16:07:07.808180 coreos-metadata[919]: Jan 29 16:07:07.808 INFO wrote hostname ci-4230.0.0-a-732fe1e27c to /sysroot/etc/hostname
Jan 29 16:07:07.818470 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:07:07.959948 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:07:08.018107 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:07:08.039685 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:07:08.048696 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:07:08.797971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:07:08.817250 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:07:08.824818 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:07:08.840338 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:07:08.858481 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:08.877846 ignition[1039]: INFO : Ignition 2.20.0
Jan 29 16:07:08.877846 ignition[1039]: INFO : Stage: mount
Jan 29 16:07:08.877846 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:08.877846 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:08.877846 ignition[1039]: INFO : mount: mount passed
Jan 29 16:07:08.877846 ignition[1039]: INFO : Ignition finished successfully
Jan 29 16:07:08.883343 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:07:08.892275 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:07:08.913321 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:07:08.927345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:07:08.964102 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050)
Jan 29 16:07:08.977829 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:07:08.977865 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:07:08.981904 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:07:08.988098 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:07:08.990578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:07:09.015908 ignition[1068]: INFO : Ignition 2.20.0
Jan 29 16:07:09.015908 ignition[1068]: INFO : Stage: files
Jan 29 16:07:09.015908 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:09.015908 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:09.038885 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:07:09.038885 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:07:09.038885 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:07:09.094147 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:07:09.108720 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:07:09.117328 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:07:09.108826 unknown[1068]: wrote ssh authorized keys file for user: core
Jan 29 16:07:09.132711 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 16:07:09.132711 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 16:07:09.175318 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:07:09.285660 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 16:07:09.285660 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:07:09.285660 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 16:07:09.748160 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:07:09.827139 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:07:09.838610 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 16:07:10.230712 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:07:10.477053 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:07:10.477053 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:07:10.499248 ignition[1068]: INFO : files: files passed
Jan 29 16:07:10.499248 ignition[1068]: INFO : Ignition finished successfully
Jan 29 16:07:10.499743 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:07:10.548789 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:07:10.563243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:07:10.623180 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:10.623180 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:10.586837 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:07:10.657805 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:07:10.587473 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:07:10.616417 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:07:10.630357 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:07:10.666303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:07:10.713565 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:07:10.713694 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:07:10.732927 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:07:10.738553 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:07:10.749679 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:07:10.764570 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:07:10.785209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:07:10.800321 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:07:10.820340 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:07:10.822110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:07:10.832343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:10.844896 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:10.857209 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:07:10.868037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:07:10.868113 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:07:10.883941 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:07:10.895612 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:07:10.905948 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:07:10.916734 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:07:10.928560 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:07:10.940587 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:07:10.952025 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:07:10.963901 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:07:10.975994 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:07:10.986438 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:07:10.996072 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:07:10.996164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:07:11.010844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:11.017050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:11.029629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:07:11.034740 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:11.041918 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:07:11.041991 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:07:11.062051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:07:11.062121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:07:11.078366 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:07:11.078416 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:07:11.089345 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:07:11.089389 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:07:11.163056 ignition[1121]: INFO : Ignition 2.20.0
Jan 29 16:07:11.163056 ignition[1121]: INFO : Stage: umount
Jan 29 16:07:11.163056 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:07:11.163056 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:07:11.163056 ignition[1121]: INFO : umount: umount passed
Jan 29 16:07:11.163056 ignition[1121]: INFO : Ignition finished successfully
Jan 29 16:07:11.122286 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:07:11.142710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:07:11.155188 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:07:11.155280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:11.170930 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:07:11.170993 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:07:11.182564 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:07:11.182656 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:07:11.196936 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:07:11.197288 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:07:11.197331 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:07:11.207128 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:07:11.207200 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:07:11.219021 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:07:11.219095 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:07:11.230717 systemd[1]: Stopped target network.target - Network.
Jan 29 16:07:11.241832 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:07:11.241914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:07:11.254660 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:07:11.259920 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:07:11.265714 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:11.273213 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:07:11.283482 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:07:11.295490 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:07:11.295531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:11.306859 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:07:11.306900 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:11.318094 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:07:11.318150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:07:11.328052 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:07:11.328106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:11.338901 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:07:11.351811 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:11.363932 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:07:11.364052 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:11.381148 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:07:11.381383 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:07:11.383744 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:07:11.396634 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:07:11.617188 kernel: hv_netvsc 002248b8-e43e-0022-48b8-e43e002248b8 eth0: Data path switched from VF: enP29847s1
Jan 29 16:07:11.396862 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:07:11.396966 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:07:11.408009 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:07:11.408070 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:11.419270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:07:11.419345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:07:11.453307 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:07:11.462573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:07:11.462666 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:07:11.474589 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:07:11.474659 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:11.490310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:07:11.490371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:11.496285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:07:11.496337 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:11.513321 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:11.525073 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:07:11.525165 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:07:11.553215 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:07:11.553360 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:11.567719 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:07:11.567775 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:11.579917 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:07:11.579952 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:11.599620 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:07:11.599679 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:11.617242 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:07:11.617325 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:11.628869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:11.628936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:11.664314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:07:11.679237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:07:11.679315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:11.908892 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:07:11.693306 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:07:11.693364 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:11.709000 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:07:11.709071 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:11.723408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:11.723469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:11.743477 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:07:11.743553 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:07:11.743872 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:07:11.743957 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:07:11.754145 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:07:11.754229 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:07:11.766371 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:07:11.794622 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:07:11.817892 systemd[1]: Switching root.
Jan 29 16:07:12.002542 systemd-journald[218]: Journal stopped
Jan 29 16:07:16.489933 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:07:16.489957 kernel: SELinux: policy capability open_perms=1
Jan 29 16:07:16.489966 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:07:16.489978 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:07:16.489998 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:07:16.490006 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:07:16.490015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:07:16.490023 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:07:16.490031 kernel: audit: type=1403 audit(1738166833.151:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:07:16.490040 systemd[1]: Successfully loaded SELinux policy in 142.462ms.
Jan 29 16:07:16.490055 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.649ms.
Jan 29 16:07:16.490065 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:07:16.490073 systemd[1]: Detected virtualization microsoft.
Jan 29 16:07:16.490092 systemd[1]: Detected architecture arm64.
Jan 29 16:07:16.490103 systemd[1]: Detected first boot.
Jan 29 16:07:16.490114 systemd[1]: Hostname set to .
Jan 29 16:07:16.490123 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:07:16.490132 zram_generator::config[1163]: No configuration found.
Jan 29 16:07:16.490141 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:07:16.490149 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:07:16.490158 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:07:16.490167 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:07:16.490177 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:07:16.490186 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:07:16.490195 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:07:16.490204 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:07:16.490212 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:07:16.490221 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:07:16.490230 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:07:16.490241 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:07:16.490250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:07:16.490259 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:07:16.490268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:16.490277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:16.490287 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:07:16.490295 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:07:16.490304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:07:16.490315 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:07:16.490323 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 16:07:16.490332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:16.490343 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:07:16.490352 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:07:16.490361 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:07:16.490370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:07:16.490379 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:16.490390 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:07:16.490398 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:07:16.490407 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:07:16.490416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:07:16.490425 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:07:16.490435 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:07:16.490446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:16.490455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:16.490465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:16.490474 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:07:16.490483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:07:16.490492 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:07:16.490501 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:07:16.490512 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:07:16.490521 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:07:16.490530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:07:16.490540 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:07:16.490550 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:07:16.490559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:07:16.490568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:16.490578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:07:16.490588 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:07:16.490598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:16.490607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:07:16.490616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:16.490625 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:07:16.490634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:16.490643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:07:16.490653 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:07:16.490664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:07:16.490673 kernel: fuse: init (API version 7.39)
Jan 29 16:07:16.490681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:07:16.490690 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:07:16.490699 kernel: loop: module loaded
Jan 29 16:07:16.490711 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:16.490722 kernel: ACPI: bus type drm_connector registered
Jan 29 16:07:16.490730 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:07:16.490739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:07:16.490766 systemd-journald[1267]: Collecting audit messages is disabled.
Jan 29 16:07:16.490786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:07:16.490797 systemd-journald[1267]: Journal started
Jan 29 16:07:16.490818 systemd-journald[1267]: Runtime Journal (/run/log/journal/29e20214855247639fc25dc1a7b7735b) is 8M, max 78.5M, 70.5M free.
Jan 29 16:07:15.591638 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:07:15.598230 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:07:15.598601 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:07:15.598933 systemd[1]: systemd-journald.service: Consumed 3.429s CPU time.
Jan 29 16:07:16.532099 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:07:16.548575 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:07:16.564757 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:07:16.574460 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:07:16.574529 systemd[1]: Stopped verity-setup.service.
Jan 29 16:07:16.595717 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:07:16.593552 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:07:16.600117 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:07:16.606244 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:07:16.612619 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:07:16.618915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:07:16.625682 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:07:16.633127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:07:16.639962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:16.649622 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:07:16.649792 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:07:16.656679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:16.656840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:16.663559 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:07:16.663711 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:07:16.669616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:16.669762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:16.676852 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:07:16.676999 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:07:16.683344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:16.683496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:16.689841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:16.696073 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:07:16.703326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:07:16.711502 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:07:16.718668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:16.736261 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:07:16.749161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:07:16.758055 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:07:16.764586 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:07:16.764625 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:07:16.772916 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:07:16.791268 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:07:16.800112 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:07:16.805812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:16.809279 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:07:16.822343 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:07:16.830169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:07:16.831273 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:07:16.838420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:07:16.839496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:07:16.849316 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:07:16.860215 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:07:16.878688 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:07:16.889199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:07:16.896554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:07:16.911663 systemd-journald[1267]: Time spent on flushing to /var/log/journal/29e20214855247639fc25dc1a7b7735b is 21.794ms for 916 entries.
Jan 29 16:07:16.911663 systemd-journald[1267]: System Journal (/var/log/journal/29e20214855247639fc25dc1a7b7735b) is 8M, max 2.6G, 2.6G free.
Jan 29 16:07:17.004417 kernel: loop0: detected capacity change from 0 to 28720
Jan 29 16:07:17.004464 systemd-journald[1267]: Received client request to flush runtime journal.
Jan 29 16:07:16.907831 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:07:16.933241 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:07:16.940764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:16.948448 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:07:16.962610 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:07:16.971140 udevadm[1306]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:07:16.982002 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jan 29 16:07:16.982013 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jan 29 16:07:16.986520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:17.005231 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:07:17.015838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:07:17.048938 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:07:17.050616 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:07:17.097702 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:07:17.111408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:07:17.131306 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Jan 29 16:07:17.131607 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Jan 29 16:07:17.135418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:17.253109 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:07:17.380114 kernel: loop1: detected capacity change from 0 to 113512
Jan 29 16:07:17.672422 kernel: loop2: detected capacity change from 0 to 194096
Jan 29 16:07:17.725154 kernel: loop3: detected capacity change from 0 to 123192
Jan 29 16:07:18.012869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:07:18.027255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:18.054890 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Jan 29 16:07:18.067119 kernel: loop4: detected capacity change from 0 to 28720
Jan 29 16:07:18.083175 kernel: loop5: detected capacity change from 0 to 113512
Jan 29 16:07:18.094266 kernel: loop6: detected capacity change from 0 to 194096
Jan 29 16:07:18.106171 kernel: loop7: detected capacity change from 0 to 123192
Jan 29 16:07:18.112343 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 29 16:07:18.112768 (sd-merge)[1333]: Merged extensions into '/usr'.
Jan 29 16:07:18.116405 systemd[1]: Reload requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:07:18.116511 systemd[1]: Reloading...
Jan 29 16:07:18.219378 zram_generator::config[1370]: No configuration found.
Jan 29 16:07:18.306395 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:07:18.419101 kernel: hv_vmbus: registering driver hv_balloon
Jan 29 16:07:18.419192 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 29 16:07:18.417194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:18.431063 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 29 16:07:18.468117 kernel: hv_vmbus: registering driver hyperv_fb
Jan 29 16:07:18.468222 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1362)
Jan 29 16:07:18.468250 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 29 16:07:18.476115 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 29 16:07:18.493107 kernel: Console: switching to colour dummy device 80x25
Jan 29 16:07:18.493199 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:07:18.535286 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 16:07:18.535348 systemd[1]: Reloading finished in 418 ms.
Jan 29 16:07:18.547952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:18.560808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:07:18.611467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 16:07:18.624377 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:07:18.630121 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:07:18.639759 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:07:18.648624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:07:18.657895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:18.675215 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:07:18.692332 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:07:18.699340 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:07:18.699360 systemd[1]: Reloading...
Jan 29 16:07:18.740666 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:07:18.740880 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:07:18.741571 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:07:18.741793 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 16:07:18.741837 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 16:07:18.773507 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:07:18.774277 systemd-tmpfiles[1517]: Skipping /boot
Jan 29 16:07:18.780297 zram_generator::config[1558]: No configuration found.
Jan 29 16:07:18.788966 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:07:18.789079 systemd-tmpfiles[1517]: Skipping /boot
Jan 29 16:07:18.791113 lvm[1522]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:07:18.886759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:18.981479 systemd[1]: Reloading finished in 281 ms.
Jan 29 16:07:19.002973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:07:19.010510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:19.017823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:19.024596 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:07:19.039568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:19.059324 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:07:19.066542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:07:19.076379 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:07:19.086788 lvm[1619]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:07:19.088269 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:07:19.102358 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:19.111606 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:07:19.128169 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:07:19.136160 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:07:19.152580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:19.157452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:19.168199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:19.180041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:19.188510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:19.188686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:19.190575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:19.190820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:19.198565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:19.198800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:19.212055 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:07:19.224361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:07:19.232184 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:19.232351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:19.251464 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:07:19.263708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:19.268463 augenrules[1658]: No rules
Jan 29 16:07:19.271405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:19.280357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:19.297358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:19.303571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:19.303696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:07:19.304929 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:07:19.305181 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:07:19.314032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:07:19.314358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:07:19.321232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:07:19.321383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:07:19.332337 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:07:19.333143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:07:19.350323 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:07:19.358581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:07:19.359274 systemd-resolved[1622]: Positive Trust Anchors: Jan 29 16:07:19.359285 systemd-resolved[1622]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:07:19.359315 systemd-resolved[1622]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:07:19.371330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:07:19.376712 systemd-networkd[1516]: lo: Link UP Jan 29 16:07:19.376721 systemd-networkd[1516]: lo: Gained carrier Jan 29 16:07:19.380427 augenrules[1671]: /sbin/augenrules: No change Jan 29 16:07:19.381717 systemd-networkd[1516]: Enumeration completed Jan 29 16:07:19.382833 systemd-networkd[1516]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:07:19.382911 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:07:19.385411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:07:19.388503 augenrules[1691]: No rules Jan 29 16:07:19.395374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:07:19.405348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:07:19.410980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:07:19.411251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:07:19.411411 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:07:19.418053 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:07:19.419724 systemd-resolved[1622]: Using system hostname 'ci-4230.0.0-a-732fe1e27c'. Jan 29 16:07:19.424758 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:07:19.424985 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:07:19.431026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:07:19.431207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:07:19.438258 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:07:19.438411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:07:19.449145 kernel: mlx5_core 7497:00:02.0 enP29847s1: Link up Jan 29 16:07:19.449518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:07:19.450513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:07:19.458756 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:07:19.458907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:07:19.475185 kernel: hv_netvsc 002248b8-e43e-0022-48b8-e43e002248b8 eth0: Data path switched to VF: enP29847s1 Jan 29 16:07:19.469529 systemd[1]: Finished ensure-sysext.service. 
Jan 29 16:07:19.476951 systemd-networkd[1516]: enP29847s1: Link UP Jan 29 16:07:19.477058 systemd-networkd[1516]: eth0: Link UP Jan 29 16:07:19.477062 systemd-networkd[1516]: eth0: Gained carrier Jan 29 16:07:19.477076 systemd-networkd[1516]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:07:19.482201 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:07:19.491382 systemd-networkd[1516]: enP29847s1: Gained carrier Jan 29 16:07:19.492347 systemd[1]: Reached target network.target - Network. Jan 29 16:07:19.497500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:07:19.510148 systemd-networkd[1516]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 29 16:07:19.512231 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:07:19.519860 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:07:19.526559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:07:19.526636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:07:19.562784 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:07:19.695940 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:07:19.703462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 16:07:20.721291 systemd-networkd[1516]: enP29847s1: Gained IPv6LL Jan 29 16:07:21.425268 systemd-networkd[1516]: eth0: Gained IPv6LL Jan 29 16:07:21.427330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:07:21.435550 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:07:24.416220 ldconfig[1298]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:07:24.461557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:07:24.475372 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:07:24.482525 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:07:24.488622 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:07:24.494646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:07:24.501580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:07:24.508452 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:07:24.514496 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:07:24.521165 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:07:24.528290 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:07:24.528321 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:07:24.533122 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:07:24.539054 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:07:24.547942 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 29 16:07:24.555260 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:07:24.562314 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:07:24.568922 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:07:24.582972 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:07:24.589177 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:07:24.595895 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:07:24.601632 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:07:24.606742 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:07:24.611699 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:07:24.611729 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:07:24.617177 systemd[1]: Starting chronyd.service - NTP client/server... Jan 29 16:07:24.624215 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:07:24.636276 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:07:24.652491 (chronyd)[1714]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 29 16:07:24.655909 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:07:24.662237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:07:24.669285 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:07:24.674658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 16:07:24.674777 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 29 16:07:24.675886 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 29 16:07:24.681633 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 29 16:07:24.682729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:07:24.691252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:07:24.700327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:07:24.706123 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:07:24.713404 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:07:24.723250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:07:24.732270 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:07:24.738604 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:07:24.739021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:07:24.741262 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:07:24.749205 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:07:24.759662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:07:24.759999 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 16:07:24.792370 KVP[1723]: KVP starting; pid is:1723 Jan 29 16:07:24.797133 KVP[1723]: KVP LIC Version: 3.1 Jan 29 16:07:24.798112 kernel: hv_utils: KVP IC version 4.0 Jan 29 16:07:24.798312 chronyd[1742]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 29 16:07:24.862692 chronyd[1742]: Timezone right/UTC failed leap second check, ignoring Jan 29 16:07:24.862846 chronyd[1742]: Loaded seccomp filter (level 2) Jan 29 16:07:24.870195 jq[1721]: false Jan 29 16:07:24.865103 systemd[1]: Started chronyd.service - NTP client/server. Jan 29 16:07:24.870944 jq[1733]: true Jan 29 16:07:24.874145 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:07:24.876139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:07:24.887340 tar[1736]: linux-arm64/helm Jan 29 16:07:24.889851 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:07:24.893457 jq[1753]: true Jan 29 16:07:24.891766 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:07:24.892474 (ntainerd)[1755]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:07:24.923707 extend-filesystems[1722]: Found loop4 Jan 29 16:07:24.945171 update_engine[1732]: I20250129 16:07:24.925585 1732 main.cc:92] Flatcar Update Engine starting Jan 29 16:07:24.928665 systemd-logind[1731]: New seat seat0. 
Jan 29 16:07:24.945646 extend-filesystems[1722]: Found loop5 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found loop6 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found loop7 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda1 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda2 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda3 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found usr Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda4 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda6 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda7 Jan 29 16:07:24.945646 extend-filesystems[1722]: Found sda9 Jan 29 16:07:24.945646 extend-filesystems[1722]: Checking size of /dev/sda9 Jan 29 16:07:24.929508 systemd-logind[1731]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:07:24.929694 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:07:24.948126 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:07:25.287703 dbus-daemon[1717]: [system] SELinux support is enabled Jan 29 16:07:25.287887 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:07:25.296932 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:07:25.297767 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 16:07:25.297995 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 16:07:25.306510 update_engine[1732]: I20250129 16:07:25.306463 1732 update_check_scheduler.cc:74] Next update check in 4m41s Jan 29 16:07:25.308450 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:07:25.308472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:07:25.320942 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:07:25.337465 tar[1736]: linux-arm64/LICENSE Jan 29 16:07:25.337465 tar[1736]: linux-arm64/README.md Jan 29 16:07:25.337608 extend-filesystems[1722]: Old size kept for /dev/sda9 Jan 29 16:07:25.337608 extend-filesystems[1722]: Found sr0 Jan 29 16:07:25.442316 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1789) Jan 29 16:07:25.342770 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.434 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.449 INFO Fetch successful Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.449 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.455 INFO Fetch successful Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.456 INFO Fetching http://168.63.129.16/machine/eae4868d-7f3c-4b88-82ab-24803eb2e58d/453a92b1%2Dd010%2D4e38%2Da9ae%2D354a438c5b37.%5Fci%2D4230.0.0%2Da%2D732fe1e27c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.458 INFO Fetch successful Jan 29 16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.459 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 29 
16:07:25.987024 coreos-metadata[1716]: Jan 29 16:07:25.476 INFO Fetch successful Jan 29 16:07:25.355271 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:07:25.358292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:07:25.515713 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:07:25.524711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:07:25.988233 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:07:26.328729 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:07:27.094956 containerd[1755]: time="2025-01-29T16:07:27.057631140Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:07:27.099482 containerd[1755]: time="2025-01-29T16:07:27.099412500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.100114 bash[1781]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:07:27.101455 containerd[1755]: time="2025-01-29T16:07:27.101412460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:07:27.101544 containerd[1755]: time="2025-01-29T16:07:27.101530380Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:07:27.101607 containerd[1755]: time="2025-01-29T16:07:27.101594460Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 16:07:27.101793 containerd[1755]: time="2025-01-29T16:07:27.101776140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:07:27.101895 containerd[1755]: time="2025-01-29T16:07:27.101881060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102014 containerd[1755]: time="2025-01-29T16:07:27.101997180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102071 containerd[1755]: time="2025-01-29T16:07:27.102059820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102365 containerd[1755]: time="2025-01-29T16:07:27.102346620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102744 containerd[1755]: time="2025-01-29T16:07:27.102416820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102744 containerd[1755]: time="2025-01-29T16:07:27.102435660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102744 containerd[1755]: time="2025-01-29T16:07:27.102445060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102744 containerd[1755]: time="2025-01-29T16:07:27.102530220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:07:27.102744 containerd[1755]: time="2025-01-29T16:07:27.102715380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:07:27.103007 containerd[1755]: time="2025-01-29T16:07:27.102990100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:07:27.103070 containerd[1755]: time="2025-01-29T16:07:27.103057380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:07:27.103597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:07:27.111251 containerd[1755]: time="2025-01-29T16:07:27.110804740Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:07:27.111251 containerd[1755]: time="2025-01-29T16:07:27.111037020Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:07:27.122818 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125614820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125670700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125686380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125702900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125724100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:07:27.126108 containerd[1755]: time="2025-01-29T16:07:27.125869420Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:07:27.126364 containerd[1755]: time="2025-01-29T16:07:27.126346060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:07:27.126517 containerd[1755]: time="2025-01-29T16:07:27.126501420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:07:27.126578 containerd[1755]: time="2025-01-29T16:07:27.126566420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:07:27.126640 containerd[1755]: time="2025-01-29T16:07:27.126627580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:07:27.126699 containerd[1755]: time="2025-01-29T16:07:27.126686820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.126759 containerd[1755]: time="2025-01-29T16:07:27.126746780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126802700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126823340Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126838220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126850780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126862860Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126874340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126911540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126923140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126935500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126946780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126959220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126970220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 16:07:27.127196 containerd[1755]: time="2025-01-29T16:07:27.126982500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.126993700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127009420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127021540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127033260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127044700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127058180Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127078220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127115980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.127478 containerd[1755]: time="2025-01-29T16:07:27.127135060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127727940Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127759900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127849620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127865740Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127876580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127889700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127898780Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:07:27.128849 containerd[1755]: time="2025-01-29T16:07:27.127908700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Jan 29 16:07:27.129900 containerd[1755]: time="2025-01-29T16:07:27.129349260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:07:27.129900 containerd[1755]: time="2025-01-29T16:07:27.129410220Z" level=info msg="Connect containerd service"
Jan 29 16:07:27.129900 containerd[1755]: time="2025-01-29T16:07:27.129448820Z" level=info msg="using legacy CRI server"
Jan 29 16:07:27.129900 containerd[1755]: time="2025-01-29T16:07:27.129455300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:07:27.129900 containerd[1755]: time="2025-01-29T16:07:27.129563820Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:07:27.130318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:27.136978 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:27.140227 containerd[1755]: time="2025-01-29T16:07:27.139690900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:07:27.143270 containerd[1755]: time="2025-01-29T16:07:27.142443980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:07:27.143270 containerd[1755]: time="2025-01-29T16:07:27.142502500Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:07:27.146218 containerd[1755]: time="2025-01-29T16:07:27.146173460Z" level=info msg="Start subscribing containerd event"
Jan 29 16:07:27.146329 containerd[1755]: time="2025-01-29T16:07:27.146315580Z" level=info msg="Start recovering state"
Jan 29 16:07:27.146463 containerd[1755]: time="2025-01-29T16:07:27.146450220Z" level=info msg="Start event monitor"
Jan 29 16:07:27.146520 containerd[1755]: time="2025-01-29T16:07:27.146509740Z" level=info msg="Start snapshots syncer"
Jan 29 16:07:27.146571 containerd[1755]: time="2025-01-29T16:07:27.146560500Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:07:27.146614 containerd[1755]: time="2025-01-29T16:07:27.146604340Z" level=info msg="Start streaming server"
Jan 29 16:07:27.146785 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 16:07:27.152635 containerd[1755]: time="2025-01-29T16:07:27.152616980Z" level=info msg="containerd successfully booted in 0.292501s"
Jan 29 16:07:27.582020 kubelet[1878]: E0129 16:07:27.581978 1878 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:27.584600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:27.585247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:27.586169 systemd[1]: kubelet.service: Consumed 691ms CPU time, 241.4M memory peak.
Jan 29 16:07:27.809702 sshd_keygen[1763]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:07:27.826876 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:07:27.840740 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:07:27.846938 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 29 16:07:27.854292 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:07:27.856069 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:07:27.875500 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:07:27.884255 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 29 16:07:27.895264 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:07:27.909576 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:07:27.916506 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 16:07:27.922907 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:07:27.927905 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:07:27.937151 systemd[1]: Startup finished in 683ms (kernel) + 12.052s (initrd) + 14.926s (userspace) = 27.663s.
Jan 29 16:07:28.597826 login[1909]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:28.599145 login[1910]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:28.604710 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:07:28.608285 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:07:28.617506 systemd-logind[1731]: New session 2 of user core.
Jan 29 16:07:28.620804 systemd-logind[1731]: New session 1 of user core.
Jan 29 16:07:28.626310 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:07:28.631320 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:07:28.700723 (systemd)[1917]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:07:28.703382 systemd-logind[1731]: New session c1 of user core.
Jan 29 16:07:29.026288 systemd[1917]: Queued start job for default target default.target.
Jan 29 16:07:29.034031 systemd[1917]: Created slice app.slice - User Application Slice.
Jan 29 16:07:29.034060 systemd[1917]: Reached target paths.target - Paths.
Jan 29 16:07:29.034115 systemd[1917]: Reached target timers.target - Timers.
Jan 29 16:07:29.035285 systemd[1917]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:07:29.044254 systemd[1917]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:07:29.044308 systemd[1917]: Reached target sockets.target - Sockets.
Jan 29 16:07:29.044345 systemd[1917]: Reached target basic.target - Basic System.
Jan 29 16:07:29.044376 systemd[1917]: Reached target default.target - Main User Target.
Jan 29 16:07:29.044403 systemd[1917]: Startup finished in 335ms.
Jan 29 16:07:29.044611 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:07:29.047109 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:07:29.047748 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:07:33.517008 waagent[1906]: 2025-01-29T16:07:33.516904Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 29 16:07:33.522546 waagent[1906]: 2025-01-29T16:07:33.522491Z INFO Daemon Daemon OS: flatcar 4230.0.0
Jan 29 16:07:33.527202 waagent[1906]: 2025-01-29T16:07:33.527149Z INFO Daemon Daemon Python: 3.11.11
Jan 29 16:07:33.531575 waagent[1906]: 2025-01-29T16:07:33.531521Z INFO Daemon Daemon Run daemon
Jan 29 16:07:33.535782 waagent[1906]: 2025-01-29T16:07:33.535735Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.0'
Jan 29 16:07:33.545219 waagent[1906]: 2025-01-29T16:07:33.545164Z INFO Daemon Daemon Using waagent for provisioning
Jan 29 16:07:33.550643 waagent[1906]: 2025-01-29T16:07:33.550596Z INFO Daemon Daemon Activate resource disk
Jan 29 16:07:33.555269 waagent[1906]: 2025-01-29T16:07:33.555220Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 29 16:07:33.568218 waagent[1906]: 2025-01-29T16:07:33.568158Z INFO Daemon Daemon Found device: None
Jan 29 16:07:33.573014 waagent[1906]: 2025-01-29T16:07:33.572968Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 29 16:07:33.581909 waagent[1906]: 2025-01-29T16:07:33.581856Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 29 16:07:33.593675 waagent[1906]: 2025-01-29T16:07:33.593622Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 29 16:07:33.599458 waagent[1906]: 2025-01-29T16:07:33.599410Z INFO Daemon Daemon Running default provisioning handler
Jan 29 16:07:33.611191 waagent[1906]: 2025-01-29T16:07:33.610636Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 29 16:07:33.625183 waagent[1906]: 2025-01-29T16:07:33.625113Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 29 16:07:33.634800 waagent[1906]: 2025-01-29T16:07:33.634749Z INFO Daemon Daemon cloud-init is enabled: False
Jan 29 16:07:33.640434 waagent[1906]: 2025-01-29T16:07:33.640385Z INFO Daemon Daemon Copying ovf-env.xml
Jan 29 16:07:33.910080 waagent[1906]: 2025-01-29T16:07:33.908647Z INFO Daemon Daemon Successfully mounted dvd
Jan 29 16:07:33.963834 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 29 16:07:33.966114 waagent[1906]: 2025-01-29T16:07:33.965638Z INFO Daemon Daemon Detect protocol endpoint
Jan 29 16:07:33.970707 waagent[1906]: 2025-01-29T16:07:33.970651Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 29 16:07:33.976655 waagent[1906]: 2025-01-29T16:07:33.976599Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 29 16:07:33.983379 waagent[1906]: 2025-01-29T16:07:33.983323Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 29 16:07:33.989262 waagent[1906]: 2025-01-29T16:07:33.989204Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 29 16:07:33.994443 waagent[1906]: 2025-01-29T16:07:33.994390Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 29 16:07:34.226093 waagent[1906]: 2025-01-29T16:07:34.225968Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 29 16:07:34.232652 waagent[1906]: 2025-01-29T16:07:34.232621Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 29 16:07:34.237950 waagent[1906]: 2025-01-29T16:07:34.237900Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 29 16:07:34.544190 waagent[1906]: 2025-01-29T16:07:34.544030Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 29 16:07:34.550390 waagent[1906]: 2025-01-29T16:07:34.550330Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 29 16:07:34.559897 waagent[1906]: 2025-01-29T16:07:34.559844Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 29 16:07:34.579436 waagent[1906]: 2025-01-29T16:07:34.579381Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 29 16:07:34.585347 waagent[1906]: 2025-01-29T16:07:34.585292Z INFO Daemon
Jan 29 16:07:34.588368 waagent[1906]: 2025-01-29T16:07:34.588316Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 20d01bc6-b960-4340-a5d0-7bec8a0ca1af eTag: 6534944789713476394 source: Fabric]
Jan 29 16:07:34.599517 waagent[1906]: 2025-01-29T16:07:34.599465Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 29 16:07:34.606513 waagent[1906]: 2025-01-29T16:07:34.606460Z INFO Daemon
Jan 29 16:07:34.609277 waagent[1906]: 2025-01-29T16:07:34.609225Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 29 16:07:34.619863 waagent[1906]: 2025-01-29T16:07:34.619817Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 29 16:07:34.719178 waagent[1906]: 2025-01-29T16:07:34.719070Z INFO Daemon Downloaded certificate {'thumbprint': '40C137E0AE397FF4C318F7F20C5BB3906F86D8F2', 'hasPrivateKey': True}
Jan 29 16:07:34.729159 waagent[1906]: 2025-01-29T16:07:34.729076Z INFO Daemon Downloaded certificate {'thumbprint': '2AB93586D0DCF5D34AF67BE27C3D9C63D82D53B1', 'hasPrivateKey': False}
Jan 29 16:07:34.738730 waagent[1906]: 2025-01-29T16:07:34.738677Z INFO Daemon Fetch goal state completed
Jan 29 16:07:34.750685 waagent[1906]: 2025-01-29T16:07:34.750634Z INFO Daemon Daemon Starting provisioning
Jan 29 16:07:34.755792 waagent[1906]: 2025-01-29T16:07:34.755741Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 29 16:07:34.760952 waagent[1906]: 2025-01-29T16:07:34.760893Z INFO Daemon Daemon Set hostname [ci-4230.0.0-a-732fe1e27c]
Jan 29 16:07:34.904106 waagent[1906]: 2025-01-29T16:07:34.901920Z INFO Daemon Daemon Publish hostname [ci-4230.0.0-a-732fe1e27c]
Jan 29 16:07:34.908361 waagent[1906]: 2025-01-29T16:07:34.908296Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 29 16:07:34.914613 waagent[1906]: 2025-01-29T16:07:34.914561Z INFO Daemon Daemon Primary interface is [eth0]
Jan 29 16:07:34.926654 systemd-networkd[1516]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:34.926672 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:34.926699 systemd-networkd[1516]: eth0: DHCP lease lost
Jan 29 16:07:34.927708 waagent[1906]: 2025-01-29T16:07:34.927646Z INFO Daemon Daemon Create user account if not exists
Jan 29 16:07:34.933135 waagent[1906]: 2025-01-29T16:07:34.933067Z INFO Daemon Daemon User core already exists, skip useradd
Jan 29 16:07:34.938829 waagent[1906]: 2025-01-29T16:07:34.938776Z INFO Daemon Daemon Configure sudoer
Jan 29 16:07:34.943515 waagent[1906]: 2025-01-29T16:07:34.943457Z INFO Daemon Daemon Configure sshd
Jan 29 16:07:34.947886 waagent[1906]: 2025-01-29T16:07:34.947827Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 29 16:07:34.960534 waagent[1906]: 2025-01-29T16:07:34.960466Z INFO Daemon Daemon Deploy ssh public key.
Jan 29 16:07:34.974143 systemd-networkd[1516]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 16:07:36.051616 waagent[1906]: 2025-01-29T16:07:36.051560Z INFO Daemon Daemon Provisioning complete
Jan 29 16:07:36.073521 waagent[1906]: 2025-01-29T16:07:36.073468Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 29 16:07:36.079906 waagent[1906]: 2025-01-29T16:07:36.079837Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 29 16:07:36.091889 waagent[1906]: 2025-01-29T16:07:36.091432Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 29 16:07:36.222159 waagent[1974]: 2025-01-29T16:07:36.222068Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 29 16:07:36.222941 waagent[1974]: 2025-01-29T16:07:36.222560Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.0
Jan 29 16:07:36.222941 waagent[1974]: 2025-01-29T16:07:36.222634Z INFO ExtHandler ExtHandler Python: 3.11.11
Jan 29 16:07:37.105121 waagent[1974]: 2025-01-29T16:07:37.104723Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 29 16:07:37.105121 waagent[1974]: 2025-01-29T16:07:37.105026Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:37.105280 waagent[1974]: 2025-01-29T16:07:37.105118Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:37.114018 waagent[1974]: 2025-01-29T16:07:37.113931Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 29 16:07:37.120783 waagent[1974]: 2025-01-29T16:07:37.120730Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 29 16:07:37.121378 waagent[1974]: 2025-01-29T16:07:37.121328Z INFO ExtHandler
Jan 29 16:07:37.121455 waagent[1974]: 2025-01-29T16:07:37.121421Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3e8d2150-dab7-439e-978e-2407ac882b39 eTag: 6534944789713476394 source: Fabric]
Jan 29 16:07:37.121762 waagent[1974]: 2025-01-29T16:07:37.121720Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 29 16:07:37.128246 waagent[1974]: 2025-01-29T16:07:37.128156Z INFO ExtHandler
Jan 29 16:07:37.128345 waagent[1974]: 2025-01-29T16:07:37.128309Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 29 16:07:37.132765 waagent[1974]: 2025-01-29T16:07:37.132622Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 29 16:07:37.214586 waagent[1974]: 2025-01-29T16:07:37.214482Z INFO ExtHandler Downloaded certificate {'thumbprint': '40C137E0AE397FF4C318F7F20C5BB3906F86D8F2', 'hasPrivateKey': True}
Jan 29 16:07:37.215013 waagent[1974]: 2025-01-29T16:07:37.214966Z INFO ExtHandler Downloaded certificate {'thumbprint': '2AB93586D0DCF5D34AF67BE27C3D9C63D82D53B1', 'hasPrivateKey': False}
Jan 29 16:07:37.215505 waagent[1974]: 2025-01-29T16:07:37.215455Z INFO ExtHandler Fetch goal state completed
Jan 29 16:07:37.235353 waagent[1974]: 2025-01-29T16:07:37.235285Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1974
Jan 29 16:07:37.235659 waagent[1974]: 2025-01-29T16:07:37.235482Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 29 16:07:37.237243 waagent[1974]: 2025-01-29T16:07:37.237192Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 29 16:07:37.237629 waagent[1974]: 2025-01-29T16:07:37.237589Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 29 16:07:37.255301 waagent[1974]: 2025-01-29T16:07:37.255253Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 29 16:07:37.255505 waagent[1974]: 2025-01-29T16:07:37.255463Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 29 16:07:37.261785 waagent[1974]: 2025-01-29T16:07:37.261279Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 29 16:07:37.267786 systemd[1]: Reload requested from client PID 1989 ('systemctl') (unit waagent.service)...
Jan 29 16:07:37.267800 systemd[1]: Reloading...
Jan 29 16:07:37.368116 zram_generator::config[2028]: No configuration found.
Jan 29 16:07:37.475179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:37.576470 systemd[1]: Reloading finished in 308 ms.
Jan 29 16:07:37.591791 waagent[1974]: 2025-01-29T16:07:37.591411Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 29 16:07:37.596568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:07:37.598334 systemd[1]: Reload requested from client PID 2085 ('systemctl') (unit waagent.service)...
Jan 29 16:07:37.598347 systemd[1]: Reloading...
Jan 29 16:07:37.688242 zram_generator::config[2125]: No configuration found.
Jan 29 16:07:37.807802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:37.904965 systemd[1]: Reloading finished in 306 ms.
Jan 29 16:07:37.926109 waagent[1974]: 2025-01-29T16:07:37.921547Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 29 16:07:37.926109 waagent[1974]: 2025-01-29T16:07:37.921726Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 29 16:07:37.936483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:38.113263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:38.126410 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:38.173224 kubelet[2189]: E0129 16:07:38.173164 2189 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:38.176630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:38.176896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:38.177492 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.8M memory peak.
Jan 29 16:07:38.646130 waagent[1974]: 2025-01-29T16:07:38.645705Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 29 16:07:38.646448 waagent[1974]: 2025-01-29T16:07:38.646390Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 29 16:07:38.647346 waagent[1974]: 2025-01-29T16:07:38.647236Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 29 16:07:38.647468 waagent[1974]: 2025-01-29T16:07:38.647389Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:38.647651 waagent[1974]: 2025-01-29T16:07:38.647554Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:38.647900 waagent[1974]: 2025-01-29T16:07:38.647851Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 29 16:07:38.648353 waagent[1974]: 2025-01-29T16:07:38.648294Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 29 16:07:38.648503 waagent[1974]: 2025-01-29T16:07:38.648458Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:38.648655 waagent[1974]: 2025-01-29T16:07:38.648545Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:38.648842 waagent[1974]: 2025-01-29T16:07:38.648796Z INFO EnvHandler ExtHandler Configure routes
Jan 29 16:07:38.649195 waagent[1974]: 2025-01-29T16:07:38.649144Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 29 16:07:38.649430 waagent[1974]: 2025-01-29T16:07:38.649384Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 29 16:07:38.649430 waagent[1974]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 29 16:07:38.649430 waagent[1974]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 29 16:07:38.649430 waagent[1974]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 29 16:07:38.649430 waagent[1974]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:38.649430 waagent[1974]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:38.649430 waagent[1974]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:38.650059 waagent[1974]: 2025-01-29T16:07:38.649954Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 29 16:07:38.650152 waagent[1974]: 2025-01-29T16:07:38.650076Z INFO EnvHandler ExtHandler Gateway:None
Jan 29 16:07:38.650222 waagent[1974]: 2025-01-29T16:07:38.650186Z INFO EnvHandler ExtHandler Routes:None
Jan 29 16:07:38.650737 waagent[1974]: 2025-01-29T16:07:38.650670Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 29 16:07:38.650950 waagent[1974]: 2025-01-29T16:07:38.650862Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 29 16:07:38.651261 waagent[1974]: 2025-01-29T16:07:38.651212Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 29 16:07:38.657789 waagent[1974]: 2025-01-29T16:07:38.657722Z INFO ExtHandler ExtHandler
Jan 29 16:07:38.658408 waagent[1974]: 2025-01-29T16:07:38.658331Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d4df8fed-645e-40ca-8c1b-41584bf803e6 correlation 7f7e26e2-502b-4fc9-b0bb-302d549adbbc created: 2025-01-29T16:06:14.551987Z]
Jan 29 16:07:38.660115 waagent[1974]: 2025-01-29T16:07:38.659701Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 29 16:07:38.660379 waagent[1974]: 2025-01-29T16:07:38.660330Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Jan 29 16:07:38.701136 waagent[1974]: 2025-01-29T16:07:38.700826Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 29 16:07:38.701136 waagent[1974]: Executing ['ip', '-a', '-o', 'link']:
Jan 29 16:07:38.701136 waagent[1974]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 29 16:07:38.701136 waagent[1974]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:e4:3e brd ff:ff:ff:ff:ff:ff
Jan 29 16:07:38.701136 waagent[1974]: 3: enP29847s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:e4:3e brd ff:ff:ff:ff:ff:ff\ altname enP29847p0s2
Jan 29 16:07:38.701136 waagent[1974]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 29 16:07:38.701136 waagent[1974]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 29 16:07:38.701136 waagent[1974]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 29 16:07:38.701136 waagent[1974]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 29 16:07:38.701136 waagent[1974]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 29 16:07:38.701136 waagent[1974]: 2: eth0 inet6 fe80::222:48ff:feb8:e43e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 16:07:38.701136 waagent[1974]: 3: enP29847s1 inet6 fe80::222:48ff:feb8:e43e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 16:07:38.715509 waagent[1974]: 2025-01-29T16:07:38.715420Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 76961052-A3BC-40E1-ABFF-0ACC8590808A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 29 16:07:38.765192 waagent[1974]: 2025-01-29T16:07:38.765102Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 29 16:07:38.765192 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.765192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.765192 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.765192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.765192 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.765192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.765192 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 16:07:38.765192 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 16:07:38.765192 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 16:07:38.768192 waagent[1974]: 2025-01-29T16:07:38.768115Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 29 16:07:38.768192 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.768192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.768192 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.768192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.768192 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:38.768192 waagent[1974]: pkts bytes target prot opt in out source destination
Jan 29 16:07:38.768192 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 16:07:38.768192 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 16:07:38.768192 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 16:07:38.768454 waagent[1974]: 2025-01-29T16:07:38.768414Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 29 16:07:48.296301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 16:07:48.304325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:48.388254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:48.390666 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:48.428055 kubelet[2233]: E0129 16:07:48.427964 2233 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:48.430241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:48.430382 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:48.430844 systemd[1]: kubelet.service: Consumed 114ms CPU time, 93.2M memory peak.
Jan 29 16:07:48.652670 chronyd[1742]: Selected source PHC0
Jan 29 16:07:58.546384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 16:07:58.553354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:58.636052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:58.639232 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:58.726686 kubelet[2250]: E0129 16:07:58.726624 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:58.728741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:58.728865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:58.729440 systemd[1]: kubelet.service: Consumed 123ms CPU time, 96.9M memory peak.
Jan 29 16:08:06.527362 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 29 16:08:08.796451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 16:08:08.805283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:08.886861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:08.890227 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:08.957496 kubelet[2266]: E0129 16:08:08.957443 2266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:08.959705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:08.959854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:08.960311 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.9M memory peak.
Jan 29 16:08:10.281228 update_engine[1732]: I20250129 16:08:10.281147 1732 update_attempter.cc:509] Updating boot flags...
Jan 29 16:08:10.354141 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2289)
Jan 29 16:08:10.464687 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2279)
Jan 29 16:08:19.046300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 16:08:19.051252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:19.331000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:19.334420 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:19.370967 kubelet[2396]: E0129 16:08:19.370893 2396 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:19.373542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:19.373689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:19.374191 systemd[1]: kubelet.service: Consumed 115ms CPU time, 96.4M memory peak.
Jan 29 16:08:23.646279 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:08:23.654381 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:44474.service - OpenSSH per-connection server daemon (10.200.16.10:44474).
Jan 29 16:08:24.188344 sshd[2405]: Accepted publickey for core from 10.200.16.10 port 44474 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:24.189648 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:24.194182 systemd-logind[1731]: New session 3 of user core.
Jan 29 16:08:24.202294 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:08:24.574382 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:44480.service - OpenSSH per-connection server daemon (10.200.16.10:44480).
Jan 29 16:08:25.003289 sshd[2410]: Accepted publickey for core from 10.200.16.10 port 44480 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:25.004521 sshd-session[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:25.009927 systemd-logind[1731]: New session 4 of user core.
Jan 29 16:08:25.015228 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:08:25.314206 sshd[2412]: Connection closed by 10.200.16.10 port 44480
Jan 29 16:08:25.314051 sshd-session[2410]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:25.317397 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:44480.service: Deactivated successfully.
Jan 29 16:08:25.318900 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:08:25.320894 systemd-logind[1731]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:08:25.322009 systemd-logind[1731]: Removed session 4.
Jan 29 16:08:25.398309 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:44484.service - OpenSSH per-connection server daemon (10.200.16.10:44484).
Jan 29 16:08:25.836691 sshd[2418]: Accepted publickey for core from 10.200.16.10 port 44484 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:25.837935 sshd-session[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:25.843127 systemd-logind[1731]: New session 5 of user core.
Jan 29 16:08:25.849238 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:08:26.152980 sshd[2420]: Connection closed by 10.200.16.10 port 44484
Jan 29 16:08:26.152788 sshd-session[2418]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:26.156398 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:44484.service: Deactivated successfully.
Jan 29 16:08:26.157894 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:08:26.158909 systemd-logind[1731]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:08:26.159844 systemd-logind[1731]: Removed session 5.
Jan 29 16:08:26.238382 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:47290.service - OpenSSH per-connection server daemon (10.200.16.10:47290).
Jan 29 16:08:26.672745 sshd[2426]: Accepted publickey for core from 10.200.16.10 port 47290 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:26.674031 sshd-session[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:26.679719 systemd-logind[1731]: New session 6 of user core.
Jan 29 16:08:26.686266 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:08:26.990408 sshd[2428]: Connection closed by 10.200.16.10 port 47290
Jan 29 16:08:26.990217 sshd-session[2426]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:26.994866 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:47290.service: Deactivated successfully.
Jan 29 16:08:26.997249 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:08:26.998072 systemd-logind[1731]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:08:26.999374 systemd-logind[1731]: Removed session 6.
Jan 29 16:08:27.077321 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:47292.service - OpenSSH per-connection server daemon (10.200.16.10:47292).
Jan 29 16:08:27.520641 sshd[2434]: Accepted publickey for core from 10.200.16.10 port 47292 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:27.521831 sshd-session[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:27.525916 systemd-logind[1731]: New session 7 of user core.
Jan 29 16:08:27.537210 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:08:27.948201 sudo[2437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:08:27.948480 sudo[2437]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:27.976965 sudo[2437]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:28.085500 sshd[2436]: Connection closed by 10.200.16.10 port 47292
Jan 29 16:08:28.084716 sshd-session[2434]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:28.088455 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:47292.service: Deactivated successfully.
Jan 29 16:08:28.090079 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:08:28.090738 systemd-logind[1731]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:08:28.091863 systemd-logind[1731]: Removed session 7.
Jan 29 16:08:28.165763 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:47294.service - OpenSSH per-connection server daemon (10.200.16.10:47294).
Jan 29 16:08:28.610602 sshd[2443]: Accepted publickey for core from 10.200.16.10 port 47294 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:28.611863 sshd-session[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:28.617319 systemd-logind[1731]: New session 8 of user core.
Jan 29 16:08:28.623224 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:08:28.861841 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:08:28.862325 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:28.865308 sudo[2447]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:28.869587 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:08:28.869831 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:28.882680 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:08:28.903605 augenrules[2469]: No rules
Jan 29 16:08:28.905053 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:08:28.905289 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:08:28.906127 sudo[2446]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:29.014147 sshd[2445]: Connection closed by 10.200.16.10 port 47294
Jan 29 16:08:29.014610 sshd-session[2443]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:29.018449 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:47294.service: Deactivated successfully.
Jan 29 16:08:29.020041 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:08:29.021490 systemd-logind[1731]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:08:29.022435 systemd-logind[1731]: Removed session 8.
Jan 29 16:08:29.095027 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:47302.service - OpenSSH per-connection server daemon (10.200.16.10:47302).
Jan 29 16:08:29.455713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 16:08:29.467286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:29.537310 sshd[2478]: Accepted publickey for core from 10.200.16.10 port 47302 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:29.537833 sshd-session[2478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:29.548921 systemd-logind[1731]: New session 9 of user core.
Jan 29 16:08:29.552329 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:08:29.555259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:29.558721 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:29.599565 kubelet[2487]: E0129 16:08:29.599429 2487 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:29.602276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:29.602461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:29.604266 systemd[1]: kubelet.service: Consumed 122ms CPU time, 96.6M memory peak.
Jan 29 16:08:29.784556 sudo[2497]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:08:29.784816 sudo[2497]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:30.898456 (dockerd)[2515]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:08:30.899026 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:08:31.772779 dockerd[2515]: time="2025-01-29T16:08:31.772532065Z" level=info msg="Starting up"
Jan 29 16:08:32.076554 dockerd[2515]: time="2025-01-29T16:08:32.076481100Z" level=info msg="Loading containers: start."
Jan 29 16:08:32.287118 kernel: Initializing XFRM netlink socket
Jan 29 16:08:32.416798 systemd-networkd[1516]: docker0: Link UP
Jan 29 16:08:32.457262 dockerd[2515]: time="2025-01-29T16:08:32.457227344Z" level=info msg="Loading containers: done."
Jan 29 16:08:32.475547 dockerd[2515]: time="2025-01-29T16:08:32.475508125Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:08:32.475838 dockerd[2515]: time="2025-01-29T16:08:32.475818206Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:08:32.476021 dockerd[2515]: time="2025-01-29T16:08:32.476004766Z" level=info msg="Daemon has completed initialization"
Jan 29 16:08:32.529507 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:08:32.530491 dockerd[2515]: time="2025-01-29T16:08:32.530132989Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:08:33.034773 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4044882740-merged.mount: Deactivated successfully.
Jan 29 16:08:34.261144 containerd[1755]: time="2025-01-29T16:08:34.261067409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 16:08:35.000075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512984428.mount: Deactivated successfully.
Jan 29 16:08:36.616128 containerd[1755]: time="2025-01-29T16:08:36.616041717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:36.619552 containerd[1755]: time="2025-01-29T16:08:36.619318641Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935"
Jan 29 16:08:36.625003 containerd[1755]: time="2025-01-29T16:08:36.624924007Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:36.630646 containerd[1755]: time="2025-01-29T16:08:36.630576254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:36.631795 containerd[1755]: time="2025-01-29T16:08:36.631602295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.370474486s"
Jan 29 16:08:36.631795 containerd[1755]: time="2025-01-29T16:08:36.631646135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 16:08:36.655865 containerd[1755]: time="2025-01-29T16:08:36.655819723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 16:08:38.286236 containerd[1755]: time="2025-01-29T16:08:38.286190972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:38.288335 containerd[1755]: time="2025-01-29T16:08:38.288283855Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561"
Jan 29 16:08:38.290837 containerd[1755]: time="2025-01-29T16:08:38.290786619Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:38.297131 containerd[1755]: time="2025-01-29T16:08:38.297052387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:38.298301 containerd[1755]: time="2025-01-29T16:08:38.298188789Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.642318985s"
Jan 29 16:08:38.298301 containerd[1755]: time="2025-01-29T16:08:38.298219789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 16:08:38.319742 containerd[1755]: time="2025-01-29T16:08:38.319666378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 16:08:39.595794 containerd[1755]: time="2025-01-29T16:08:39.595748717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:39.599075 containerd[1755]: time="2025-01-29T16:08:39.599010682Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338"
Jan 29 16:08:39.605108 containerd[1755]: time="2025-01-29T16:08:39.603993648Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:39.611140 containerd[1755]: time="2025-01-29T16:08:39.611112538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:39.613306 containerd[1755]: time="2025-01-29T16:08:39.613273821Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.293349363s"
Jan 29 16:08:39.613424 containerd[1755]: time="2025-01-29T16:08:39.613407221Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 16:08:39.632641 containerd[1755]: time="2025-01-29T16:08:39.632614327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 16:08:39.796268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 16:08:39.806464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:39.893756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:39.897066 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:39.937774 kubelet[2790]: E0129 16:08:39.937733 2790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:39.940260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:39.940408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:39.940839 systemd[1]: kubelet.service: Consumed 118ms CPU time, 92.9M memory peak.
Jan 29 16:08:41.160376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140524666.mount: Deactivated successfully.
Jan 29 16:08:42.030075 containerd[1755]: time="2025-01-29T16:08:42.030012355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:42.032314 containerd[1755]: time="2025-01-29T16:08:42.032098117Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712"
Jan 29 16:08:42.035817 containerd[1755]: time="2025-01-29T16:08:42.035765402Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:42.040503 containerd[1755]: time="2025-01-29T16:08:42.040454529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:42.041424 containerd[1755]: time="2025-01-29T16:08:42.040995770Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 2.408160842s"
Jan 29 16:08:42.041424 containerd[1755]: time="2025-01-29T16:08:42.041032970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 16:08:42.063209 containerd[1755]: time="2025-01-29T16:08:42.062986519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 16:08:42.712395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191897621.mount: Deactivated successfully.
Jan 29 16:08:43.583135 containerd[1755]: time="2025-01-29T16:08:43.582378790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:43.586639 containerd[1755]: time="2025-01-29T16:08:43.586586916Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 29 16:08:43.590418 containerd[1755]: time="2025-01-29T16:08:43.590358561Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:43.596690 containerd[1755]: time="2025-01-29T16:08:43.596655490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:43.597939 containerd[1755]: time="2025-01-29T16:08:43.597815531Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.534789051s"
Jan 29 16:08:43.597939 containerd[1755]: time="2025-01-29T16:08:43.597850011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 16:08:43.618675 containerd[1755]: time="2025-01-29T16:08:43.618638040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 16:08:44.214197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024490188.mount: Deactivated successfully.
Jan 29 16:08:44.239129 containerd[1755]: time="2025-01-29T16:08:44.238431324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:44.240732 containerd[1755]: time="2025-01-29T16:08:44.240694247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jan 29 16:08:44.246383 containerd[1755]: time="2025-01-29T16:08:44.246345775Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:44.251437 containerd[1755]: time="2025-01-29T16:08:44.251393662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:44.252400 containerd[1755]: time="2025-01-29T16:08:44.252365863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 633.689663ms"
Jan 29 16:08:44.252515 containerd[1755]: time="2025-01-29T16:08:44.252498623Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 16:08:44.273515 containerd[1755]: time="2025-01-29T16:08:44.273478572Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 16:08:45.020660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376014231.mount: Deactivated successfully.
Jan 29 16:08:48.741195 containerd[1755]: time="2025-01-29T16:08:48.740725749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:48.743273 containerd[1755]: time="2025-01-29T16:08:48.742986991Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jan 29 16:08:48.747768 containerd[1755]: time="2025-01-29T16:08:48.747701716Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:48.754756 containerd[1755]: time="2025-01-29T16:08:48.754699284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:48.755975 containerd[1755]: time="2025-01-29T16:08:48.755810005Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.482118073s"
Jan 29 16:08:48.755975 containerd[1755]: time="2025-01-29T16:08:48.755854245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 16:08:50.046850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 29 16:08:50.062401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:50.158269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:50.162199 (kubelet)[2978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:50.210655 kubelet[2978]: E0129 16:08:50.210599 2978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:50.214831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:50.215215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:50.215642 systemd[1]: kubelet.service: Consumed 115ms CPU time, 96.4M memory peak.
Jan 29 16:08:54.637537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:54.637852 systemd[1]: kubelet.service: Consumed 115ms CPU time, 96.4M memory peak.
Jan 29 16:08:54.644558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:54.663941 systemd[1]: Reload requested from client PID 2993 ('systemctl') (unit session-9.scope)...
Jan 29 16:08:54.663962 systemd[1]: Reloading...
Jan 29 16:08:54.760132 zram_generator::config[3040]: No configuration found.
Jan 29 16:08:54.891523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:54.992064 systemd[1]: Reloading finished in 327 ms.
Jan 29 16:08:55.033673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:55.039231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:55.040365 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:08:55.040627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:55.040681 systemd[1]: kubelet.service: Consumed 77ms CPU time, 82.3M memory peak.
Jan 29 16:08:55.042425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:55.132451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:55.136620 (kubelet)[3110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:08:55.175900 kubelet[3110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:55.177202 kubelet[3110]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:08:55.177202 kubelet[3110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:55.177202 kubelet[3110]: I0129 16:08:55.176317 3110 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:08:55.844020 kubelet[3110]: I0129 16:08:55.843986 3110 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:08:55.844200 kubelet[3110]: I0129 16:08:55.844189 3110 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:08:55.844460 kubelet[3110]: I0129 16:08:55.844445 3110 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:08:55.858018 kubelet[3110]: E0129 16:08:55.857976 3110 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 29 16:08:55.859339 kubelet[3110]: I0129 16:08:55.859305 3110 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:08:55.868429 kubelet[3110]: I0129 16:08:55.868406 3110 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:08:55.869742 kubelet[3110]: I0129 16:08:55.869710 3110 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:08:55.869987 kubelet[3110]: I0129 16:08:55.869821 3110 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-a-732fe1e27c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:08:55.870161 kubelet[3110]: I0129 16:08:55.870147 3110 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:08:55.870224 kubelet[3110]: I0129 16:08:55.870215 3110 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:08:55.870402 kubelet[3110]: I0129 16:08:55.870388 3110 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:08:55.871233 kubelet[3110]: I0129 16:08:55.871214 3110 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:08:55.871321 kubelet[3110]: I0129 16:08:55.871311 3110 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:08:55.871398 kubelet[3110]: I0129 16:08:55.871389 3110 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:08:55.871456 kubelet[3110]: I0129 16:08:55.871448 3110 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:08:55.874246 kubelet[3110]: W0129 16:08:55.872956 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-732fe1e27c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 29 16:08:55.874246 kubelet[3110]: E0129 16:08:55.873009 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-732fe1e27c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 29 16:08:55.874246 kubelet[3110]: W0129 16:08:55.873321 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 29 16:08:55.874246 kubelet[3110]: E0129 16:08:55.873356 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
Jan 29 16:08:55.874246 kubelet[3110]: I0129 16:08:55.873643 3110 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:08:55.874246 kubelet[3110]: I0129 16:08:55.873788 3110 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:08:55.874246 kubelet[3110]: W0129 16:08:55.873827 3110 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:08:55.874574 kubelet[3110]: I0129 16:08:55.874551 3110 server.go:1264] "Started kubelet"
Jan 29 16:08:55.879363 kubelet[3110]: I0129 16:08:55.877911 3110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:08:55.879363 kubelet[3110]: E0129 16:08:55.878153 3110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.0-a-732fe1e27c.181f359e752164ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-732fe1e27c,UID:ci-4230.0.0-a-732fe1e27c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-732fe1e27c,},FirstTimestamp:2025-01-29 16:08:55.874528495 +0000 UTC m=+0.734483930,LastTimestamp:2025-01-29 16:08:55.874528495 +0000 UTC m=+0.734483930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-732fe1e27c,}"
Jan 29 16:08:55.879363 kubelet[3110]: I0129 16:08:55.879243 3110 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29
16:08:55.880942 kubelet[3110]: I0129 16:08:55.880923 3110 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:08:55.881861 kubelet[3110]: I0129 16:08:55.881818 3110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:08:55.882120 kubelet[3110]: I0129 16:08:55.882106 3110 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:08:55.883333 kubelet[3110]: I0129 16:08:55.883308 3110 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:08:55.884305 kubelet[3110]: E0129 16:08:55.884274 3110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-732fe1e27c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms" Jan 29 16:08:55.884725 kubelet[3110]: I0129 16:08:55.884704 3110 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:08:55.884902 kubelet[3110]: I0129 16:08:55.884885 3110 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:08:55.886215 kubelet[3110]: I0129 16:08:55.886197 3110 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:08:55.886590 kubelet[3110]: I0129 16:08:55.886566 3110 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:08:55.886654 kubelet[3110]: I0129 16:08:55.886636 3110 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:08:55.903112 kubelet[3110]: W0129 16:08:55.903058 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.20.10:6443: connect: connection refused Jan 29 16:08:55.903240 kubelet[3110]: E0129 16:08:55.903227 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:55.904384 kubelet[3110]: E0129 16:08:55.904361 3110 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:08:55.909328 kubelet[3110]: I0129 16:08:55.909290 3110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:08:55.910505 kubelet[3110]: I0129 16:08:55.910485 3110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:08:55.910595 kubelet[3110]: I0129 16:08:55.910586 3110 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:08:55.910730 kubelet[3110]: I0129 16:08:55.910718 3110 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:08:55.911041 kubelet[3110]: E0129 16:08:55.910811 3110 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:08:55.911738 kubelet[3110]: W0129 16:08:55.911674 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:55.911738 kubelet[3110]: E0129 16:08:55.911737 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: 
connection refused Jan 29 16:08:55.913349 kubelet[3110]: I0129 16:08:55.913324 3110 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:08:55.913511 kubelet[3110]: I0129 16:08:55.913459 3110 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:08:55.913511 kubelet[3110]: I0129 16:08:55.913480 3110 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:08:55.929753 kubelet[3110]: I0129 16:08:55.929724 3110 policy_none.go:49] "None policy: Start" Jan 29 16:08:55.930555 kubelet[3110]: I0129 16:08:55.930535 3110 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:08:55.930622 kubelet[3110]: I0129 16:08:55.930568 3110 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:08:55.938608 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:08:55.947870 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:08:55.950745 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:08:55.961049 kubelet[3110]: I0129 16:08:55.961019 3110 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:08:55.961272 kubelet[3110]: I0129 16:08:55.961232 3110 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:08:55.961576 kubelet[3110]: I0129 16:08:55.961339 3110 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:08:55.963516 kubelet[3110]: E0129 16:08:55.963451 3110 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.0-a-732fe1e27c\" not found" Jan 29 16:08:55.985197 kubelet[3110]: I0129 16:08:55.985166 3110 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:55.985511 kubelet[3110]: E0129 16:08:55.985480 3110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.011900 kubelet[3110]: I0129 16:08:56.011862 3110 topology_manager.go:215] "Topology Admit Handler" podUID="10664176655891879f1bd7900f0f841d" podNamespace="kube-system" podName="kube-apiserver-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.013267 kubelet[3110]: I0129 16:08:56.013236 3110 topology_manager.go:215] "Topology Admit Handler" podUID="68743d9deddebebf5b9e324851f46b49" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.014714 kubelet[3110]: I0129 16:08:56.014673 3110 topology_manager.go:215] "Topology Admit Handler" podUID="40a747a4ade56c42d2e5800e5efb8bbe" podNamespace="kube-system" podName="kube-scheduler-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.021381 systemd[1]: Created slice kubepods-burstable-pod10664176655891879f1bd7900f0f841d.slice - libcontainer container 
kubepods-burstable-pod10664176655891879f1bd7900f0f841d.slice. Jan 29 16:08:56.041947 systemd[1]: Created slice kubepods-burstable-pod68743d9deddebebf5b9e324851f46b49.slice - libcontainer container kubepods-burstable-pod68743d9deddebebf5b9e324851f46b49.slice. Jan 29 16:08:56.046669 systemd[1]: Created slice kubepods-burstable-pod40a747a4ade56c42d2e5800e5efb8bbe.slice - libcontainer container kubepods-burstable-pod40a747a4ade56c42d2e5800e5efb8bbe.slice. Jan 29 16:08:56.085018 kubelet[3110]: E0129 16:08:56.084971 3110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-732fe1e27c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" Jan 29 16:08:56.088287 kubelet[3110]: I0129 16:08:56.088257 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088474 kubelet[3110]: I0129 16:08:56.088296 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088474 kubelet[3110]: I0129 16:08:56.088317 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088474 kubelet[3110]: I0129 16:08:56.088336 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40a747a4ade56c42d2e5800e5efb8bbe-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-732fe1e27c\" (UID: \"40a747a4ade56c42d2e5800e5efb8bbe\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088474 kubelet[3110]: I0129 16:08:56.088363 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088474 kubelet[3110]: I0129 16:08:56.088378 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088612 kubelet[3110]: I0129 16:08:56.088399 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088612 kubelet[3110]: I0129 16:08:56.088415 3110 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.088612 kubelet[3110]: I0129 16:08:56.088438 3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.187346 kubelet[3110]: I0129 16:08:56.187215 3110 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.188409 kubelet[3110]: E0129 16:08:56.188337 3110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.340638 containerd[1755]: time="2025-01-29T16:08:56.340450765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-732fe1e27c,Uid:10664176655891879f1bd7900f0f841d,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:56.345861 containerd[1755]: time="2025-01-29T16:08:56.345676411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-732fe1e27c,Uid:68743d9deddebebf5b9e324851f46b49,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:56.351053 containerd[1755]: time="2025-01-29T16:08:56.350383097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-732fe1e27c,Uid:40a747a4ade56c42d2e5800e5efb8bbe,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:56.486006 kubelet[3110]: E0129 
16:08:56.485890 3110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-732fe1e27c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Jan 29 16:08:56.590554 kubelet[3110]: I0129 16:08:56.590203 3110 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.590554 kubelet[3110]: E0129 16:08:56.590514 3110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:56.723471 kubelet[3110]: W0129 16:08:56.723418 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:56.723471 kubelet[3110]: E0129 16:08:56.723470 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:56.762150 kubelet[3110]: W0129 16:08:56.762048 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:56.762150 kubelet[3110]: E0129 16:08:56.762104 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.20.10:6443: connect: connection refused Jan 29 16:08:57.043236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263366591.mount: Deactivated successfully. Jan 29 16:08:57.072216 containerd[1755]: time="2025-01-29T16:08:57.071638770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:57.078283 containerd[1755]: time="2025-01-29T16:08:57.078234378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 16:08:57.091662 containerd[1755]: time="2025-01-29T16:08:57.091006074Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:57.095962 containerd[1755]: time="2025-01-29T16:08:57.095918160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:57.099027 containerd[1755]: time="2025-01-29T16:08:57.098982044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:08:57.106870 containerd[1755]: time="2025-01-29T16:08:57.106829694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:57.107723 containerd[1755]: time="2025-01-29T16:08:57.107696175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 767.15745ms" Jan 29 16:08:57.112465 containerd[1755]: time="2025-01-29T16:08:57.112431581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:57.124401 containerd[1755]: time="2025-01-29T16:08:57.124338196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:08:57.125914 containerd[1755]: time="2025-01-29T16:08:57.125722758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 779.975347ms" Jan 29 16:08:57.145109 containerd[1755]: time="2025-01-29T16:08:57.144911622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 794.045404ms" Jan 29 16:08:57.287729 kubelet[3110]: E0129 16:08:57.287677 3110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-732fe1e27c?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="1.6s" Jan 29 16:08:57.366186 kubelet[3110]: W0129 16:08:57.366116 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: 
connect: connection refused Jan 29 16:08:57.366186 kubelet[3110]: E0129 16:08:57.366182 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:57.378651 kubelet[3110]: W0129 16:08:57.378606 3110 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-732fe1e27c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:57.378651 kubelet[3110]: E0129 16:08:57.378654 3110 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-732fe1e27c&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:57.392488 kubelet[3110]: I0129 16:08:57.392209 3110 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:57.392613 kubelet[3110]: E0129 16:08:57.392486 3110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4230.0.0-a-732fe1e27c" Jan 29 16:08:57.676359 containerd[1755]: time="2025-01-29T16:08:57.675848534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:57.676359 containerd[1755]: time="2025-01-29T16:08:57.675953854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:57.676359 containerd[1755]: time="2025-01-29T16:08:57.675973694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.678277 containerd[1755]: time="2025-01-29T16:08:57.676210815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.680142 containerd[1755]: time="2025-01-29T16:08:57.679800299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:57.680142 containerd[1755]: time="2025-01-29T16:08:57.680027139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:57.680670 containerd[1755]: time="2025-01-29T16:08:57.680602020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.681580 containerd[1755]: time="2025-01-29T16:08:57.681503221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.682392 containerd[1755]: time="2025-01-29T16:08:57.681721942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:57.682392 containerd[1755]: time="2025-01-29T16:08:57.681773622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:57.682392 containerd[1755]: time="2025-01-29T16:08:57.681785462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.682392 containerd[1755]: time="2025-01-29T16:08:57.681852142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:57.812286 systemd[1]: Started cri-containerd-2230ac7d8317aa0fd4685d094ff48e4223cacb17c473504bea8facfa7d6e60ef.scope - libcontainer container 2230ac7d8317aa0fd4685d094ff48e4223cacb17c473504bea8facfa7d6e60ef. Jan 29 16:08:57.813407 systemd[1]: Started cri-containerd-ed8e25883162f22e66ecb4288a34b10a8113c0357d2fa6364d3b4a453acee43a.scope - libcontainer container ed8e25883162f22e66ecb4288a34b10a8113c0357d2fa6364d3b4a453acee43a. Jan 29 16:08:57.815544 systemd[1]: Started cri-containerd-f38d46210ec506e4bee6d304bb4abf849a9bf6e740c3c5d32fe008daba7ff5f0.scope - libcontainer container f38d46210ec506e4bee6d304bb4abf849a9bf6e740c3c5d32fe008daba7ff5f0. Jan 29 16:08:57.864617 containerd[1755]: time="2025-01-29T16:08:57.863678012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-732fe1e27c,Uid:10664176655891879f1bd7900f0f841d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f38d46210ec506e4bee6d304bb4abf849a9bf6e740c3c5d32fe008daba7ff5f0\"" Jan 29 16:08:57.865198 containerd[1755]: time="2025-01-29T16:08:57.863911332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-732fe1e27c,Uid:68743d9deddebebf5b9e324851f46b49,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed8e25883162f22e66ecb4288a34b10a8113c0357d2fa6364d3b4a453acee43a\"" Jan 29 16:08:57.870835 containerd[1755]: time="2025-01-29T16:08:57.870555020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-732fe1e27c,Uid:40a747a4ade56c42d2e5800e5efb8bbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"2230ac7d8317aa0fd4685d094ff48e4223cacb17c473504bea8facfa7d6e60ef\"" Jan 29 16:08:57.871718 containerd[1755]: 
time="2025-01-29T16:08:57.871498102Z" level=info msg="CreateContainer within sandbox \"ed8e25883162f22e66ecb4288a34b10a8113c0357d2fa6364d3b4a453acee43a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:08:57.871718 containerd[1755]: time="2025-01-29T16:08:57.871621502Z" level=info msg="CreateContainer within sandbox \"f38d46210ec506e4bee6d304bb4abf849a9bf6e740c3c5d32fe008daba7ff5f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:08:57.874584 containerd[1755]: time="2025-01-29T16:08:57.874548426Z" level=info msg="CreateContainer within sandbox \"2230ac7d8317aa0fd4685d094ff48e4223cacb17c473504bea8facfa7d6e60ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:08:57.949776 kubelet[3110]: E0129 16:08:57.949651 3110 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.10:6443: connect: connection refused Jan 29 16:08:58.182365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149699572.mount: Deactivated successfully. 
Jan 29 16:08:58.198788 containerd[1755]: time="2025-01-29T16:08:58.198746596Z" level=info msg="CreateContainer within sandbox \"f38d46210ec506e4bee6d304bb4abf849a9bf6e740c3c5d32fe008daba7ff5f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"21f524046566a48d8ec5ded32527328eea4cafe8be628a0a41155ede7af46f0b\"" Jan 29 16:08:58.199742 containerd[1755]: time="2025-01-29T16:08:58.199711437Z" level=info msg="StartContainer for \"21f524046566a48d8ec5ded32527328eea4cafe8be628a0a41155ede7af46f0b\"" Jan 29 16:08:58.215756 containerd[1755]: time="2025-01-29T16:08:58.215645137Z" level=info msg="CreateContainer within sandbox \"2230ac7d8317aa0fd4685d094ff48e4223cacb17c473504bea8facfa7d6e60ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e03a3cdbc8e9f97bc34fe0bb8cdbd26915790911bc6727e01be47ca7cd27bab\"" Jan 29 16:08:58.216427 containerd[1755]: time="2025-01-29T16:08:58.216394138Z" level=info msg="StartContainer for \"9e03a3cdbc8e9f97bc34fe0bb8cdbd26915790911bc6727e01be47ca7cd27bab\"" Jan 29 16:08:58.220570 containerd[1755]: time="2025-01-29T16:08:58.219976063Z" level=info msg="CreateContainer within sandbox \"ed8e25883162f22e66ecb4288a34b10a8113c0357d2fa6364d3b4a453acee43a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e4e154d30ba32b8c1f9cc5bff2f1df6d8a40e2bb74a808d1b5bbc9c4ffa2c84\"" Jan 29 16:08:58.221634 containerd[1755]: time="2025-01-29T16:08:58.221601345Z" level=info msg="StartContainer for \"8e4e154d30ba32b8c1f9cc5bff2f1df6d8a40e2bb74a808d1b5bbc9c4ffa2c84\"" Jan 29 16:08:58.224476 systemd[1]: Started cri-containerd-21f524046566a48d8ec5ded32527328eea4cafe8be628a0a41155ede7af46f0b.scope - libcontainer container 21f524046566a48d8ec5ded32527328eea4cafe8be628a0a41155ede7af46f0b. 
Jan 29 16:08:58.255330 systemd[1]: Started cri-containerd-9e03a3cdbc8e9f97bc34fe0bb8cdbd26915790911bc6727e01be47ca7cd27bab.scope - libcontainer container 9e03a3cdbc8e9f97bc34fe0bb8cdbd26915790911bc6727e01be47ca7cd27bab.
Jan 29 16:08:58.265336 systemd[1]: Started cri-containerd-8e4e154d30ba32b8c1f9cc5bff2f1df6d8a40e2bb74a808d1b5bbc9c4ffa2c84.scope - libcontainer container 8e4e154d30ba32b8c1f9cc5bff2f1df6d8a40e2bb74a808d1b5bbc9c4ffa2c84.
Jan 29 16:08:58.296804 containerd[1755]: time="2025-01-29T16:08:58.296751320Z" level=info msg="StartContainer for \"21f524046566a48d8ec5ded32527328eea4cafe8be628a0a41155ede7af46f0b\" returns successfully"
Jan 29 16:08:58.326930 containerd[1755]: time="2025-01-29T16:08:58.326877358Z" level=info msg="StartContainer for \"9e03a3cdbc8e9f97bc34fe0bb8cdbd26915790911bc6727e01be47ca7cd27bab\" returns successfully"
Jan 29 16:08:58.333291 containerd[1755]: time="2025-01-29T16:08:58.333159006Z" level=info msg="StartContainer for \"8e4e154d30ba32b8c1f9cc5bff2f1df6d8a40e2bb74a808d1b5bbc9c4ffa2c84\" returns successfully"
Jan 29 16:08:58.996573 kubelet[3110]: I0129 16:08:58.996527 3110 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:00.369592 kubelet[3110]: E0129 16:09:00.369549 3110 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.0-a-732fe1e27c\" not found" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:00.458986 kubelet[3110]: E0129 16:09:00.458881 3110 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-732fe1e27c.181f359e752164ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-732fe1e27c,UID:ci-4230.0.0-a-732fe1e27c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-732fe1e27c,},FirstTimestamp:2025-01-29 16:08:55.874528495 +0000 UTC m=+0.734483930,LastTimestamp:2025-01-29 16:08:55.874528495 +0000 UTC m=+0.734483930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-732fe1e27c,}"
Jan 29 16:09:00.513011 kubelet[3110]: I0129 16:09:00.512970 3110 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:00.556953 kubelet[3110]: E0129 16:09:00.556843 3110 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-732fe1e27c.181f359e76e86835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-732fe1e27c,UID:ci-4230.0.0-a-732fe1e27c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-732fe1e27c,},FirstTimestamp:2025-01-29 16:08:55.904348213 +0000 UTC m=+0.764303688,LastTimestamp:2025-01-29 16:08:55.904348213 +0000 UTC m=+0.764303688,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-732fe1e27c,}"
Jan 29 16:09:00.654221 kubelet[3110]: E0129 16:09:00.652547 3110 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-732fe1e27c.181f359e775c132f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-732fe1e27c,UID:ci-4230.0.0-a-732fe1e27c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230.0.0-a-732fe1e27c status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-732fe1e27c,},FirstTimestamp:2025-01-29 16:08:55.911928623 +0000 UTC m=+0.771884058,LastTimestamp:2025-01-29 16:08:55.911928623 +0000 UTC m=+0.771884058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-732fe1e27c,}"
Jan 29 16:09:00.710892 kubelet[3110]: E0129 16:09:00.710631 3110 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-732fe1e27c.181f359e775c254f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-732fe1e27c,UID:ci-4230.0.0-a-732fe1e27c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4230.0.0-a-732fe1e27c status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-732fe1e27c,},FirstTimestamp:2025-01-29 16:08:55.911933263 +0000 UTC m=+0.771888698,LastTimestamp:2025-01-29 16:08:55.911933263 +0000 UTC m=+0.771888698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-732fe1e27c,}"
Jan 29 16:09:00.875013 kubelet[3110]: I0129 16:09:00.874966 3110 apiserver.go:52] "Watching apiserver"
Jan 29 16:09:00.887763 kubelet[3110]: I0129 16:09:00.887721 3110 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:09:01.001855 kubelet[3110]: E0129 16:09:00.999372 3110 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:01.002101 kubelet[3110]: E0129 16:09:01.002058 3110 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:02.580571 systemd[1]: Reload requested from client PID 3388 ('systemctl') (unit session-9.scope)...
Jan 29 16:09:02.581888 systemd[1]: Reloading...
Jan 29 16:09:02.673159 zram_generator::config[3431]: No configuration found.
Jan 29 16:09:02.807162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:09:02.925017 systemd[1]: Reloading finished in 342 ms.
Jan 29 16:09:02.950618 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:02.950948 kubelet[3110]: I0129 16:09:02.950808 3110 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:09:02.959708 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:09:02.960046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:02.960207 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 113.2M memory peak.
Jan 29 16:09:02.964696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:03.203673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:03.209857 (kubelet)[3498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:09:03.265204 kubelet[3498]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:09:03.265204 kubelet[3498]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:09:03.265204 kubelet[3498]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:09:03.265204 kubelet[3498]: I0129 16:09:03.263220 3498 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:09:03.271896 kubelet[3498]: I0129 16:09:03.271861 3498 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:09:03.271896 kubelet[3498]: I0129 16:09:03.271891 3498 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:09:03.272096 kubelet[3498]: I0129 16:09:03.272066 3498 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:09:03.274051 kubelet[3498]: I0129 16:09:03.274030 3498 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:09:03.277125 kubelet[3498]: I0129 16:09:03.276953 3498 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:09:03.282990 kubelet[3498]: I0129 16:09:03.282972 3498 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:09:03.283336 kubelet[3498]: I0129 16:09:03.283313 3498 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:09:03.283545 kubelet[3498]: I0129 16:09:03.283392 3498 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-a-732fe1e27c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:09:03.283663 kubelet[3498]: I0129 16:09:03.283652 3498 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:09:03.283716 kubelet[3498]: I0129 16:09:03.283708 3498 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:09:03.283804 kubelet[3498]: I0129 16:09:03.283795 3498 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:09:03.284101 kubelet[3498]: I0129 16:09:03.283944 3498 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:09:03.284101 kubelet[3498]: I0129 16:09:03.283960 3498 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:09:03.284101 kubelet[3498]: I0129 16:09:03.283988 3498 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:09:03.284101 kubelet[3498]: I0129 16:09:03.284009 3498 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:09:03.285854 kubelet[3498]: I0129 16:09:03.285828 3498 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:09:03.286268 kubelet[3498]: I0129 16:09:03.286244 3498 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:09:03.286644 kubelet[3498]: I0129 16:09:03.286621 3498 server.go:1264] "Started kubelet"
Jan 29 16:09:03.289948 kubelet[3498]: I0129 16:09:03.289512 3498 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:09:03.289948 kubelet[3498]: I0129 16:09:03.289741 3498 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:09:03.289948 kubelet[3498]: I0129 16:09:03.289769 3498 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:09:03.291727 kubelet[3498]: I0129 16:09:03.291703 3498 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:09:03.291956 kubelet[3498]: I0129 16:09:03.291925 3498 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:09:03.298933 kubelet[3498]: I0129 16:09:03.298889 3498 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:09:03.299380 kubelet[3498]: I0129 16:09:03.299352 3498 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:09:03.299518 kubelet[3498]: I0129 16:09:03.299499 3498 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:09:03.306207 kubelet[3498]: I0129 16:09:03.306178 3498 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:09:03.307843 kubelet[3498]: I0129 16:09:03.307825 3498 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:09:03.308211 kubelet[3498]: I0129 16:09:03.307952 3498 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:09:03.308211 kubelet[3498]: I0129 16:09:03.307972 3498 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:09:03.308211 kubelet[3498]: E0129 16:09:03.308009 3498 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:09:03.315455 kubelet[3498]: I0129 16:09:03.315427 3498 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:09:03.315455 kubelet[3498]: I0129 16:09:03.315448 3498 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:09:03.315547 kubelet[3498]: I0129 16:09:03.315530 3498 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:09:03.392571 kubelet[3498]: I0129 16:09:03.392476 3498 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:09:03.392571 kubelet[3498]: I0129 16:09:03.392498 3498 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:09:03.392571 kubelet[3498]: I0129 16:09:03.392521 3498 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:09:03.853349 kubelet[3498]: I0129 16:09:03.402141 3498 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:03.853349 kubelet[3498]: E0129 16:09:03.409007 3498 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:09:03.853349 kubelet[3498]: I0129 16:09:03.412382 3498 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:03.853349 kubelet[3498]: E0129 16:09:03.609905 3498 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:09:03.853349 kubelet[3498]: I0129 16:09:03.851800 3498 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:03.853847 kubelet[3498]: I0129 16:09:03.853775 3498 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:09:03.853847 kubelet[3498]: I0129 16:09:03.853796 3498 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:09:03.853847 kubelet[3498]: I0129 16:09:03.853818 3498 policy_none.go:49] "None policy: Start"
Jan 29 16:09:03.856069 kubelet[3498]: I0129 16:09:03.855584 3498 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:09:03.856069 kubelet[3498]: I0129 16:09:03.855611 3498 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:09:03.856069 kubelet[3498]: I0129 16:09:03.855779 3498 state_mem.go:75] "Updated machine memory state"
Jan 29 16:09:03.863998 kubelet[3498]: I0129 16:09:03.863968 3498 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:09:03.864521 kubelet[3498]: I0129 16:09:03.864138 3498 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:09:03.864521 kubelet[3498]: I0129 16:09:03.864414 3498 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:09:03.881599 sudo[3529]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 16:09:03.881924 sudo[3529]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 16:09:04.010715 kubelet[3498]: I0129 16:09:04.010523 3498 topology_manager.go:215] "Topology Admit Handler" podUID="10664176655891879f1bd7900f0f841d" podNamespace="kube-system" podName="kube-apiserver-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.010715 kubelet[3498]: I0129 16:09:04.010647 3498 topology_manager.go:215] "Topology Admit Handler" podUID="68743d9deddebebf5b9e324851f46b49" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.013440 kubelet[3498]: I0129 16:09:04.011325 3498 topology_manager.go:215] "Topology Admit Handler" podUID="40a747a4ade56c42d2e5800e5efb8bbe" podNamespace="kube-system" podName="kube-scheduler-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.023968 kubelet[3498]: W0129 16:09:04.023470 3498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:09:04.029207 kubelet[3498]: W0129 16:09:04.029124 3498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:09:04.029548 kubelet[3498]: W0129 16:09:04.029471 3498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:09:04.104685 kubelet[3498]: I0129 16:09:04.103986 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104685 kubelet[3498]: I0129 16:09:04.104025 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104685 kubelet[3498]: I0129 16:09:04.104049 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104685 kubelet[3498]: I0129 16:09:04.104066 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104685 kubelet[3498]: I0129 16:09:04.104101 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104888 kubelet[3498]: I0129 16:09:04.104128 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40a747a4ade56c42d2e5800e5efb8bbe-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-732fe1e27c\" (UID: \"40a747a4ade56c42d2e5800e5efb8bbe\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104888 kubelet[3498]: I0129 16:09:04.104146 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104888 kubelet[3498]: I0129 16:09:04.104173 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10664176655891879f1bd7900f0f841d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-732fe1e27c\" (UID: \"10664176655891879f1bd7900f0f841d\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.104888 kubelet[3498]: I0129 16:09:04.104201 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68743d9deddebebf5b9e324851f46b49-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-732fe1e27c\" (UID: \"68743d9deddebebf5b9e324851f46b49\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c"
Jan 29 16:09:04.290780 kubelet[3498]: I0129 16:09:04.289541 3498 apiserver.go:52] "Watching apiserver"
Jan 29 16:09:04.300024 kubelet[3498]: I0129 16:09:04.299898 3498 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:09:04.365043 sudo[3529]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:04.377988 kubelet[3498]: I0129 16:09:04.377421 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.0-a-732fe1e27c" podStartSLOduration=0.377403392 podStartE2EDuration="377.403392ms" podCreationTimestamp="2025-01-29 16:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:04.358761048 +0000 UTC m=+1.143594561" watchObservedRunningTime="2025-01-29 16:09:04.377403392 +0000 UTC m=+1.162236825"
Jan 29 16:09:04.398318 kubelet[3498]: I0129 16:09:04.398163 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-732fe1e27c" podStartSLOduration=0.398148738 podStartE2EDuration="398.148738ms" podCreationTimestamp="2025-01-29 16:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:04.377862792 +0000 UTC m=+1.162696265" watchObservedRunningTime="2025-01-29 16:09:04.398148738 +0000 UTC m=+1.182982211"
Jan 29 16:09:04.417151 kubelet[3498]: I0129 16:09:04.416898 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.0-a-732fe1e27c" podStartSLOduration=0.416880041 podStartE2EDuration="416.880041ms" podCreationTimestamp="2025-01-29 16:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:04.399351219 +0000 UTC m=+1.184184692" watchObservedRunningTime="2025-01-29 16:09:04.416880041 +0000 UTC m=+1.201713514"
Jan 29 16:09:05.836624 sudo[2497]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:05.905535 sshd[2489]: Connection closed by 10.200.16.10 port 47302
Jan 29 16:09:05.906332 sshd-session[2478]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:05.909779 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:47302.service: Deactivated successfully.
Jan 29 16:09:05.912792 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:09:05.913189 systemd[1]: session-9.scope: Consumed 7.454s CPU time, 293.6M memory peak.
Jan 29 16:09:05.915311 systemd-logind[1731]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:09:05.916461 systemd-logind[1731]: Removed session 9.
Jan 29 16:09:17.243701 kubelet[3498]: I0129 16:09:17.243661 3498 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:09:17.244161 containerd[1755]: time="2025-01-29T16:09:17.243987090Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:09:17.244380 kubelet[3498]: I0129 16:09:17.244177 3498 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:09:18.088029 kubelet[3498]: I0129 16:09:18.087479 3498 topology_manager.go:215] "Topology Admit Handler" podUID="c6fe558f-51a1-457d-9f1c-8b904b1cc982" podNamespace="kube-system" podName="kube-proxy-9g57t"
Jan 29 16:09:18.096714 systemd[1]: Created slice kubepods-besteffort-podc6fe558f_51a1_457d_9f1c_8b904b1cc982.slice - libcontainer container kubepods-besteffort-podc6fe558f_51a1_457d_9f1c_8b904b1cc982.slice.
Jan 29 16:09:18.100580 kubelet[3498]: I0129 16:09:18.100534 3498 topology_manager.go:215] "Topology Admit Handler" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" podNamespace="kube-system" podName="cilium-gnjrp"
Jan 29 16:09:18.111915 systemd[1]: Created slice kubepods-burstable-pod775449fb_d9b9_45ca_b745_d4770c3cbb45.slice - libcontainer container kubepods-burstable-pod775449fb_d9b9_45ca_b745_d4770c3cbb45.slice.
Jan 29 16:09:18.189896 kubelet[3498]: I0129 16:09:18.189746 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6fe558f-51a1-457d-9f1c-8b904b1cc982-kube-proxy\") pod \"kube-proxy-9g57t\" (UID: \"c6fe558f-51a1-457d-9f1c-8b904b1cc982\") " pod="kube-system/kube-proxy-9g57t"
Jan 29 16:09:18.189896 kubelet[3498]: I0129 16:09:18.189791 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fe558f-51a1-457d-9f1c-8b904b1cc982-lib-modules\") pod \"kube-proxy-9g57t\" (UID: \"c6fe558f-51a1-457d-9f1c-8b904b1cc982\") " pod="kube-system/kube-proxy-9g57t"
Jan 29 16:09:18.189896 kubelet[3498]: I0129 16:09:18.189814 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6fe558f-51a1-457d-9f1c-8b904b1cc982-xtables-lock\") pod \"kube-proxy-9g57t\" (UID: \"c6fe558f-51a1-457d-9f1c-8b904b1cc982\") " pod="kube-system/kube-proxy-9g57t"
Jan 29 16:09:18.189896 kubelet[3498]: I0129 16:09:18.189830 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mmz\" (UniqueName: \"kubernetes.io/projected/c6fe558f-51a1-457d-9f1c-8b904b1cc982-kube-api-access-t6mmz\") pod \"kube-proxy-9g57t\" (UID: \"c6fe558f-51a1-457d-9f1c-8b904b1cc982\") " pod="kube-system/kube-proxy-9g57t"
Jan 29 16:09:18.290166 kubelet[3498]: I0129 16:09:18.290072 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-cgroup\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.290166 kubelet[3498]: I0129 16:09:18.290165 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-lib-modules\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.290685 kubelet[3498]: I0129 16:09:18.290210 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-etc-cni-netd\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.290685 kubelet[3498]: I0129 16:09:18.290233 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-net\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.290685 kubelet[3498]: I0129 16:09:18.290258 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-config-path\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290816 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cni-path\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290885 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-run\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290938 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-bpf-maps\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290956 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/775449fb-d9b9-45ca-b745-d4770c3cbb45-clustermesh-secrets\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290974 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-hubble-tls\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292143 kubelet[3498]: I0129 16:09:18.290992 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzsk\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-kube-api-access-vnzsk\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292368 kubelet[3498]: I0129 16:09:18.291032 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-hostproc\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292368 kubelet[3498]: I0129 16:09:18.291051 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-xtables-lock\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.292368 kubelet[3498]: I0129 16:09:18.291067 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-kernel\") pod \"cilium-gnjrp\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " pod="kube-system/cilium-gnjrp"
Jan 29 16:09:18.354752 kubelet[3498]: I0129 16:09:18.354606 3498 topology_manager.go:215] "Topology Admit Handler" podUID="df400005-747d-4fa7-a2e0-7be1b35f4388" podNamespace="kube-system" podName="cilium-operator-599987898-k67v5"
Jan 29 16:09:18.364817 systemd[1]: Created slice kubepods-besteffort-poddf400005_747d_4fa7_a2e0_7be1b35f4388.slice - libcontainer container kubepods-besteffort-poddf400005_747d_4fa7_a2e0_7be1b35f4388.slice.
Jan 29 16:09:18.392339 kubelet[3498]: I0129 16:09:18.392289 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df400005-747d-4fa7-a2e0-7be1b35f4388-cilium-config-path\") pod \"cilium-operator-599987898-k67v5\" (UID: \"df400005-747d-4fa7-a2e0-7be1b35f4388\") " pod="kube-system/cilium-operator-599987898-k67v5" Jan 29 16:09:18.392489 kubelet[3498]: I0129 16:09:18.392397 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2vk\" (UniqueName: \"kubernetes.io/projected/df400005-747d-4fa7-a2e0-7be1b35f4388-kube-api-access-kc2vk\") pod \"cilium-operator-599987898-k67v5\" (UID: \"df400005-747d-4fa7-a2e0-7be1b35f4388\") " pod="kube-system/cilium-operator-599987898-k67v5" Jan 29 16:09:18.408381 containerd[1755]: time="2025-01-29T16:09:18.408336326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9g57t,Uid:c6fe558f-51a1-457d-9f1c-8b904b1cc982,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:18.418070 containerd[1755]: time="2025-01-29T16:09:18.418030816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gnjrp,Uid:775449fb-d9b9-45ca-b745-d4770c3cbb45,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:18.472241 containerd[1755]: time="2025-01-29T16:09:18.472153473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:18.472863 containerd[1755]: time="2025-01-29T16:09:18.472801234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:18.473947 containerd[1755]: time="2025-01-29T16:09:18.473805515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.474296 containerd[1755]: time="2025-01-29T16:09:18.474215516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.478527 containerd[1755]: time="2025-01-29T16:09:18.478350920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:18.478527 containerd[1755]: time="2025-01-29T16:09:18.478403920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:18.478527 containerd[1755]: time="2025-01-29T16:09:18.478414800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.478527 containerd[1755]: time="2025-01-29T16:09:18.478486560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.492359 systemd[1]: Started cri-containerd-9d24218810ea984cc7321a28466edf4789695a0b89017f2361735752dd089bf6.scope - libcontainer container 9d24218810ea984cc7321a28466edf4789695a0b89017f2361735752dd089bf6. Jan 29 16:09:18.510269 systemd[1]: Started cri-containerd-f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136.scope - libcontainer container f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136. 
Jan 29 16:09:18.543841 containerd[1755]: time="2025-01-29T16:09:18.543804829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gnjrp,Uid:775449fb-d9b9-45ca-b745-d4770c3cbb45,Namespace:kube-system,Attempt:0,} returns sandbox id \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\"" Jan 29 16:09:18.547427 containerd[1755]: time="2025-01-29T16:09:18.547383633Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:09:18.549682 containerd[1755]: time="2025-01-29T16:09:18.549643156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9g57t,Uid:c6fe558f-51a1-457d-9f1c-8b904b1cc982,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d24218810ea984cc7321a28466edf4789695a0b89017f2361735752dd089bf6\"" Jan 29 16:09:18.554303 containerd[1755]: time="2025-01-29T16:09:18.554131920Z" level=info msg="CreateContainer within sandbox \"9d24218810ea984cc7321a28466edf4789695a0b89017f2361735752dd089bf6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:09:18.595630 containerd[1755]: time="2025-01-29T16:09:18.595547804Z" level=info msg="CreateContainer within sandbox \"9d24218810ea984cc7321a28466edf4789695a0b89017f2361735752dd089bf6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cdcec9a37d30e10af69caa9fde81b22ebb10324b63c7c7fcae8884323bc3abcf\"" Jan 29 16:09:18.597152 containerd[1755]: time="2025-01-29T16:09:18.597034246Z" level=info msg="StartContainer for \"cdcec9a37d30e10af69caa9fde81b22ebb10324b63c7c7fcae8884323bc3abcf\"" Jan 29 16:09:18.628259 systemd[1]: Started cri-containerd-cdcec9a37d30e10af69caa9fde81b22ebb10324b63c7c7fcae8884323bc3abcf.scope - libcontainer container cdcec9a37d30e10af69caa9fde81b22ebb10324b63c7c7fcae8884323bc3abcf. 
Jan 29 16:09:18.657947 containerd[1755]: time="2025-01-29T16:09:18.657894110Z" level=info msg="StartContainer for \"cdcec9a37d30e10af69caa9fde81b22ebb10324b63c7c7fcae8884323bc3abcf\" returns successfully" Jan 29 16:09:18.669928 containerd[1755]: time="2025-01-29T16:09:18.669891803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-k67v5,Uid:df400005-747d-4fa7-a2e0-7be1b35f4388,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:18.724838 containerd[1755]: time="2025-01-29T16:09:18.724574741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:18.725293 containerd[1755]: time="2025-01-29T16:09:18.725215422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:18.725479 containerd[1755]: time="2025-01-29T16:09:18.725403782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.725685 containerd[1755]: time="2025-01-29T16:09:18.725637902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:18.751006 systemd[1]: Started cri-containerd-6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a.scope - libcontainer container 6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a. Jan 29 16:09:18.784404 containerd[1755]: time="2025-01-29T16:09:18.784328125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-k67v5,Uid:df400005-747d-4fa7-a2e0-7be1b35f4388,Namespace:kube-system,Attempt:0,} returns sandbox id \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\"" Jan 29 16:09:23.041870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427067027.mount: Deactivated successfully. 
Jan 29 16:09:26.218228 containerd[1755]: time="2025-01-29T16:09:26.218172538Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:26.221722 containerd[1755]: time="2025-01-29T16:09:26.221663301Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 16:09:26.226511 containerd[1755]: time="2025-01-29T16:09:26.226460466Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:26.228679 containerd[1755]: time="2025-01-29T16:09:26.228200988Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.680773555s" Jan 29 16:09:26.228679 containerd[1755]: time="2025-01-29T16:09:26.228240308Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 16:09:26.230391 containerd[1755]: time="2025-01-29T16:09:26.229807790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:09:26.232908 containerd[1755]: time="2025-01-29T16:09:26.232850633Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:09:26.266725 containerd[1755]: time="2025-01-29T16:09:26.266620669Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\"" Jan 29 16:09:26.268170 containerd[1755]: time="2025-01-29T16:09:26.268127231Z" level=info msg="StartContainer for \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\"" Jan 29 16:09:26.294627 systemd[1]: run-containerd-runc-k8s.io-ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466-runc.MRnroO.mount: Deactivated successfully. Jan 29 16:09:26.307253 systemd[1]: Started cri-containerd-ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466.scope - libcontainer container ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466. Jan 29 16:09:26.342382 containerd[1755]: time="2025-01-29T16:09:26.341964549Z" level=info msg="StartContainer for \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\" returns successfully" Jan 29 16:09:26.350108 systemd[1]: cri-containerd-ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466.scope: Deactivated successfully. 
Jan 29 16:09:26.443593 kubelet[3498]: I0129 16:09:26.443356 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9g57t" podStartSLOduration=8.443331937 podStartE2EDuration="8.443331937s" podCreationTimestamp="2025-01-29 16:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:19.425584205 +0000 UTC m=+16.210417718" watchObservedRunningTime="2025-01-29 16:09:26.443331937 +0000 UTC m=+23.228165370" Jan 29 16:09:26.572062 containerd[1755]: time="2025-01-29T16:09:26.571848474Z" level=info msg="shim disconnected" id=ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466 namespace=k8s.io Jan 29 16:09:26.572062 containerd[1755]: time="2025-01-29T16:09:26.571900434Z" level=warning msg="cleaning up after shim disconnected" id=ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466 namespace=k8s.io Jan 29 16:09:26.572062 containerd[1755]: time="2025-01-29T16:09:26.571908194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:27.254204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466-rootfs.mount: Deactivated successfully. 
Jan 29 16:09:27.428863 containerd[1755]: time="2025-01-29T16:09:27.428809585Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:09:27.465405 containerd[1755]: time="2025-01-29T16:09:27.465358983Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\"" Jan 29 16:09:27.466549 containerd[1755]: time="2025-01-29T16:09:27.466513545Z" level=info msg="StartContainer for \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\"" Jan 29 16:09:27.503358 systemd[1]: Started cri-containerd-f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03.scope - libcontainer container f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03. Jan 29 16:09:27.537435 containerd[1755]: time="2025-01-29T16:09:27.537005220Z" level=info msg="StartContainer for \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\" returns successfully" Jan 29 16:09:27.549496 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:09:27.549709 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:09:27.550594 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:09:27.557032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:09:27.557238 systemd[1]: cri-containerd-f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03.scope: Deactivated successfully. Jan 29 16:09:27.575259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:09:27.618377 containerd[1755]: time="2025-01-29T16:09:27.618247306Z" level=info msg="shim disconnected" id=f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03 namespace=k8s.io Jan 29 16:09:27.618377 containerd[1755]: time="2025-01-29T16:09:27.618302106Z" level=warning msg="cleaning up after shim disconnected" id=f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03 namespace=k8s.io Jan 29 16:09:27.618377 containerd[1755]: time="2025-01-29T16:09:27.618310586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:28.254174 systemd[1]: run-containerd-runc-k8s.io-f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03-runc.2sPMsd.mount: Deactivated successfully. Jan 29 16:09:28.254287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03-rootfs.mount: Deactivated successfully. Jan 29 16:09:28.431876 containerd[1755]: time="2025-01-29T16:09:28.431837531Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:09:28.904254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943989592.mount: Deactivated successfully. 
Jan 29 16:09:29.010296 containerd[1755]: time="2025-01-29T16:09:29.009905466Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\"" Jan 29 16:09:29.011586 containerd[1755]: time="2025-01-29T16:09:29.011526347Z" level=info msg="StartContainer for \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\"" Jan 29 16:09:29.052364 systemd[1]: Started cri-containerd-28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec.scope - libcontainer container 28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec. Jan 29 16:09:29.102711 systemd[1]: cri-containerd-28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec.scope: Deactivated successfully. Jan 29 16:09:29.106951 containerd[1755]: time="2025-01-29T16:09:29.105724047Z" level=info msg="StartContainer for \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\" returns successfully" Jan 29 16:09:29.253970 systemd[1]: run-containerd-runc-k8s.io-28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec-runc.YdZyfZ.mount: Deactivated successfully. Jan 29 16:09:29.254069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec-rootfs.mount: Deactivated successfully. 
Jan 29 16:09:31.110729 containerd[1755]: time="2025-01-29T16:09:31.110476419Z" level=info msg="shim disconnected" id=28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec namespace=k8s.io Jan 29 16:09:31.110729 containerd[1755]: time="2025-01-29T16:09:31.110562219Z" level=warning msg="cleaning up after shim disconnected" id=28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec namespace=k8s.io Jan 29 16:09:31.110729 containerd[1755]: time="2025-01-29T16:09:31.110572579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:31.196120 containerd[1755]: time="2025-01-29T16:09:31.195456429Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:31.199585 containerd[1755]: time="2025-01-29T16:09:31.199533033Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 16:09:31.204348 containerd[1755]: time="2025-01-29T16:09:31.204312759Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:09:31.206494 containerd[1755]: time="2025-01-29T16:09:31.205919280Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.97607497s" Jan 29 16:09:31.206494 containerd[1755]: time="2025-01-29T16:09:31.205956920Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 16:09:31.209203 containerd[1755]: time="2025-01-29T16:09:31.209162644Z" level=info msg="CreateContainer within sandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:09:31.410757 containerd[1755]: time="2025-01-29T16:09:31.410700898Z" level=info msg="CreateContainer within sandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\"" Jan 29 16:09:31.412252 containerd[1755]: time="2025-01-29T16:09:31.412189420Z" level=info msg="StartContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\"" Jan 29 16:09:31.446340 systemd[1]: Started cri-containerd-ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2.scope - libcontainer container ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2. 
Jan 29 16:09:31.456774 containerd[1755]: time="2025-01-29T16:09:31.456701267Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:09:31.544119 containerd[1755]: time="2025-01-29T16:09:31.543824160Z" level=info msg="StartContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" returns successfully" Jan 29 16:09:31.798494 containerd[1755]: time="2025-01-29T16:09:31.798437550Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\"" Jan 29 16:09:31.799238 containerd[1755]: time="2025-01-29T16:09:31.799207191Z" level=info msg="StartContainer for \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\"" Jan 29 16:09:31.828321 systemd[1]: Started cri-containerd-70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556.scope - libcontainer container 70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556. Jan 29 16:09:31.868199 systemd[1]: cri-containerd-70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556.scope: Deactivated successfully. 
Jan 29 16:09:31.872912 containerd[1755]: time="2025-01-29T16:09:31.872059788Z" level=info msg="StartContainer for \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\" returns successfully" Jan 29 16:09:32.482647 kubelet[3498]: I0129 16:09:32.482533 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-k67v5" podStartSLOduration=2.061572482 podStartE2EDuration="14.482511277s" podCreationTimestamp="2025-01-29 16:09:18 +0000 UTC" firstStartedPulling="2025-01-29 16:09:18.785873926 +0000 UTC m=+15.570707399" lastFinishedPulling="2025-01-29 16:09:31.206812721 +0000 UTC m=+27.991646194" observedRunningTime="2025-01-29 16:09:32.464545218 +0000 UTC m=+29.249378691" watchObservedRunningTime="2025-01-29 16:09:32.482511277 +0000 UTC m=+29.267344750" Jan 29 16:09:32.992306 containerd[1755]: time="2025-01-29T16:09:32.992066061Z" level=info msg="shim disconnected" id=70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556 namespace=k8s.io Jan 29 16:09:32.992306 containerd[1755]: time="2025-01-29T16:09:32.992153141Z" level=warning msg="cleaning up after shim disconnected" id=70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556 namespace=k8s.io Jan 29 16:09:32.992306 containerd[1755]: time="2025-01-29T16:09:32.992162021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:09:33.464899 containerd[1755]: time="2025-01-29T16:09:33.464776707Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:09:33.498527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173340552.mount: Deactivated successfully. 
Jan 29 16:09:33.510752 containerd[1755]: time="2025-01-29T16:09:33.510656426Z" level=info msg="CreateContainer within sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\"" Jan 29 16:09:33.511834 containerd[1755]: time="2025-01-29T16:09:33.511363747Z" level=info msg="StartContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\"" Jan 29 16:09:33.549310 systemd[1]: Started cri-containerd-d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794.scope - libcontainer container d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794. Jan 29 16:09:33.587354 containerd[1755]: time="2025-01-29T16:09:33.587299492Z" level=info msg="StartContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" returns successfully" Jan 29 16:09:33.668383 kubelet[3498]: I0129 16:09:33.668346 3498 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:09:33.708552 kubelet[3498]: I0129 16:09:33.707959 3498 topology_manager.go:215] "Topology Admit Handler" podUID="4fb159eb-1810-4a8f-be67-3e7f56059a30" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dsn22" Jan 29 16:09:33.708552 kubelet[3498]: I0129 16:09:33.708287 3498 topology_manager.go:215] "Topology Admit Handler" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g6q5d" Jan 29 16:09:33.718145 systemd[1]: Created slice kubepods-burstable-pod4fb159eb_1810_4a8f_be67_3e7f56059a30.slice - libcontainer container kubepods-burstable-pod4fb159eb_1810_4a8f_be67_3e7f56059a30.slice. Jan 29 16:09:33.727363 systemd[1]: Created slice kubepods-burstable-podaded3c45_b21b_45f0_bfee_29eb73a91b20.slice - libcontainer container kubepods-burstable-podaded3c45_b21b_45f0_bfee_29eb73a91b20.slice. 
Jan 29 16:09:33.893692 kubelet[3498]: I0129 16:09:33.893636 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs47f\" (UniqueName: \"kubernetes.io/projected/4fb159eb-1810-4a8f-be67-3e7f56059a30-kube-api-access-gs47f\") pod \"coredns-7db6d8ff4d-dsn22\" (UID: \"4fb159eb-1810-4a8f-be67-3e7f56059a30\") " pod="kube-system/coredns-7db6d8ff4d-dsn22" Jan 29 16:09:33.893849 kubelet[3498]: I0129 16:09:33.893709 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aded3c45-b21b-45f0-bfee-29eb73a91b20-config-volume\") pod \"coredns-7db6d8ff4d-g6q5d\" (UID: \"aded3c45-b21b-45f0-bfee-29eb73a91b20\") " pod="kube-system/coredns-7db6d8ff4d-g6q5d" Jan 29 16:09:33.893849 kubelet[3498]: I0129 16:09:33.893766 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb159eb-1810-4a8f-be67-3e7f56059a30-config-volume\") pod \"coredns-7db6d8ff4d-dsn22\" (UID: \"4fb159eb-1810-4a8f-be67-3e7f56059a30\") " pod="kube-system/coredns-7db6d8ff4d-dsn22" Jan 29 16:09:33.893849 kubelet[3498]: I0129 16:09:33.893789 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgxl\" (UniqueName: \"kubernetes.io/projected/aded3c45-b21b-45f0-bfee-29eb73a91b20-kube-api-access-gdgxl\") pod \"coredns-7db6d8ff4d-g6q5d\" (UID: \"aded3c45-b21b-45f0-bfee-29eb73a91b20\") " pod="kube-system/coredns-7db6d8ff4d-g6q5d" Jan 29 16:09:34.024628 containerd[1755]: time="2025-01-29T16:09:34.024516387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dsn22,Uid:4fb159eb-1810-4a8f-be67-3e7f56059a30,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:34.032077 containerd[1755]: time="2025-01-29T16:09:34.032043954Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-g6q5d,Uid:aded3c45-b21b-45f0-bfee-29eb73a91b20,Namespace:kube-system,Attempt:0,}" Jan 29 16:09:34.485776 kubelet[3498]: I0129 16:09:34.485706 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gnjrp" podStartSLOduration=8.802166786 podStartE2EDuration="16.485687383s" podCreationTimestamp="2025-01-29 16:09:18 +0000 UTC" firstStartedPulling="2025-01-29 16:09:18.545864512 +0000 UTC m=+15.330697945" lastFinishedPulling="2025-01-29 16:09:26.229385069 +0000 UTC m=+23.014218542" observedRunningTime="2025-01-29 16:09:34.485502623 +0000 UTC m=+31.270336096" watchObservedRunningTime="2025-01-29 16:09:34.485687383 +0000 UTC m=+31.270520856" Jan 29 16:09:35.919558 systemd-networkd[1516]: cilium_host: Link UP Jan 29 16:09:35.919670 systemd-networkd[1516]: cilium_net: Link UP Jan 29 16:09:35.919807 systemd-networkd[1516]: cilium_net: Gained carrier Jan 29 16:09:35.919921 systemd-networkd[1516]: cilium_host: Gained carrier Jan 29 16:09:35.920003 systemd-networkd[1516]: cilium_net: Gained IPv6LL Jan 29 16:09:35.920936 systemd-networkd[1516]: cilium_host: Gained IPv6LL Jan 29 16:09:36.101782 systemd-networkd[1516]: cilium_vxlan: Link UP Jan 29 16:09:36.101789 systemd-networkd[1516]: cilium_vxlan: Gained carrier Jan 29 16:09:36.384195 kernel: NET: Registered PF_ALG protocol family Jan 29 16:09:37.081015 systemd-networkd[1516]: lxc_health: Link UP Jan 29 16:09:37.091903 systemd-networkd[1516]: lxc_health: Gained carrier Jan 29 16:09:37.362288 systemd-networkd[1516]: cilium_vxlan: Gained IPv6LL Jan 29 16:09:37.628841 systemd-networkd[1516]: lxc177b7043f672: Link UP Jan 29 16:09:37.640113 kernel: eth0: renamed from tmpcf628 Jan 29 16:09:37.645589 systemd-networkd[1516]: lxc177b7043f672: Gained carrier Jan 29 16:09:37.664114 systemd-networkd[1516]: lxc5326eba4e28d: Link UP Jan 29 16:09:37.678687 kernel: eth0: renamed from tmpbd193 Jan 29 16:09:37.686733 systemd-networkd[1516]: lxc5326eba4e28d: 
Gained carrier Jan 29 16:09:38.834218 systemd-networkd[1516]: lxc_health: Gained IPv6LL Jan 29 16:09:39.281213 systemd-networkd[1516]: lxc5326eba4e28d: Gained IPv6LL Jan 29 16:09:39.410192 systemd-networkd[1516]: lxc177b7043f672: Gained IPv6LL Jan 29 16:09:41.309316 containerd[1755]: time="2025-01-29T16:09:41.309102732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:41.309316 containerd[1755]: time="2025-01-29T16:09:41.309156052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:41.309316 containerd[1755]: time="2025-01-29T16:09:41.309170852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:41.309316 containerd[1755]: time="2025-01-29T16:09:41.309237732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:41.326839 containerd[1755]: time="2025-01-29T16:09:41.326139030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:09:41.326839 containerd[1755]: time="2025-01-29T16:09:41.326190230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:09:41.326839 containerd[1755]: time="2025-01-29T16:09:41.326204950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:41.326839 containerd[1755]: time="2025-01-29T16:09:41.326305350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:09:41.366285 systemd[1]: Started cri-containerd-cf6282b838ee0975b92b5c6e83f086e6ad8ce66411602a1af8537cf388194a9c.scope - libcontainer container cf6282b838ee0975b92b5c6e83f086e6ad8ce66411602a1af8537cf388194a9c. Jan 29 16:09:41.371254 systemd[1]: Started cri-containerd-bd1939a8d6e7e12cb50858c8185bf0217045d5a474a27a8bf243dc79f3373b95.scope - libcontainer container bd1939a8d6e7e12cb50858c8185bf0217045d5a474a27a8bf243dc79f3373b95. Jan 29 16:09:41.409066 containerd[1755]: time="2025-01-29T16:09:41.408997157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g6q5d,Uid:aded3c45-b21b-45f0-bfee-29eb73a91b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd1939a8d6e7e12cb50858c8185bf0217045d5a474a27a8bf243dc79f3373b95\"" Jan 29 16:09:41.415004 containerd[1755]: time="2025-01-29T16:09:41.414955563Z" level=info msg="CreateContainer within sandbox \"bd1939a8d6e7e12cb50858c8185bf0217045d5a474a27a8bf243dc79f3373b95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:09:41.442478 containerd[1755]: time="2025-01-29T16:09:41.442399952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dsn22,Uid:4fb159eb-1810-4a8f-be67-3e7f56059a30,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6282b838ee0975b92b5c6e83f086e6ad8ce66411602a1af8537cf388194a9c\"" Jan 29 16:09:41.447560 containerd[1755]: time="2025-01-29T16:09:41.447400597Z" level=info msg="CreateContainer within sandbox \"cf6282b838ee0975b92b5c6e83f086e6ad8ce66411602a1af8537cf388194a9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:09:41.471858 containerd[1755]: time="2025-01-29T16:09:41.471732143Z" level=info msg="CreateContainer within sandbox \"bd1939a8d6e7e12cb50858c8185bf0217045d5a474a27a8bf243dc79f3373b95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aecd98904355ea0c0760365cd1c9f813ed3bb63cfea271ff2b4b39bb40d374a6\"" Jan 
29 16:09:41.472899 containerd[1755]: time="2025-01-29T16:09:41.472814224Z" level=info msg="StartContainer for \"aecd98904355ea0c0760365cd1c9f813ed3bb63cfea271ff2b4b39bb40d374a6\"" Jan 29 16:09:41.508554 containerd[1755]: time="2025-01-29T16:09:41.508428701Z" level=info msg="CreateContainer within sandbox \"cf6282b838ee0975b92b5c6e83f086e6ad8ce66411602a1af8537cf388194a9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6674184900678c7c90bf4ce21413604259e91461b58d91f2c2c648aa31fe8236\"" Jan 29 16:09:41.510280 containerd[1755]: time="2025-01-29T16:09:41.510207783Z" level=info msg="StartContainer for \"6674184900678c7c90bf4ce21413604259e91461b58d91f2c2c648aa31fe8236\"" Jan 29 16:09:41.511232 systemd[1]: Started cri-containerd-aecd98904355ea0c0760365cd1c9f813ed3bb63cfea271ff2b4b39bb40d374a6.scope - libcontainer container aecd98904355ea0c0760365cd1c9f813ed3bb63cfea271ff2b4b39bb40d374a6. Jan 29 16:09:41.544283 systemd[1]: Started cri-containerd-6674184900678c7c90bf4ce21413604259e91461b58d91f2c2c648aa31fe8236.scope - libcontainer container 6674184900678c7c90bf4ce21413604259e91461b58d91f2c2c648aa31fe8236. 
Jan 29 16:09:41.554151 containerd[1755]: time="2025-01-29T16:09:41.554102229Z" level=info msg="StartContainer for \"aecd98904355ea0c0760365cd1c9f813ed3bb63cfea271ff2b4b39bb40d374a6\" returns successfully" Jan 29 16:09:41.580984 containerd[1755]: time="2025-01-29T16:09:41.580861777Z" level=info msg="StartContainer for \"6674184900678c7c90bf4ce21413604259e91461b58d91f2c2c648aa31fe8236\" returns successfully" Jan 29 16:09:42.509262 kubelet[3498]: I0129 16:09:42.509201 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podStartSLOduration=24.509025311 podStartE2EDuration="24.509025311s" podCreationTimestamp="2025-01-29 16:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:42.50762115 +0000 UTC m=+39.292454623" watchObservedRunningTime="2025-01-29 16:09:42.509025311 +0000 UTC m=+39.293858784" Jan 29 16:09:43.516583 kubelet[3498]: I0129 16:09:43.516504 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dsn22" podStartSLOduration=25.516489049 podStartE2EDuration="25.516489049s" podCreationTimestamp="2025-01-29 16:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:42.523253846 +0000 UTC m=+39.308087319" watchObservedRunningTime="2025-01-29 16:09:43.516489049 +0000 UTC m=+40.301322522" Jan 29 16:11:01.527495 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:36356.service - OpenSSH per-connection server daemon (10.200.16.10:36356). 
Jan 29 16:11:01.944790 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 36356 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:01.946175 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:01.951602 systemd-logind[1731]: New session 10 of user core. Jan 29 16:11:01.962437 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:11:02.316510 sshd[4881]: Connection closed by 10.200.16.10 port 36356 Jan 29 16:11:02.317080 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:02.320615 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:36356.service: Deactivated successfully. Jan 29 16:11:02.322427 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:11:02.323345 systemd-logind[1731]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:11:02.324469 systemd-logind[1731]: Removed session 10. Jan 29 16:11:07.409334 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:51986.service - OpenSSH per-connection server daemon (10.200.16.10:51986). Jan 29 16:11:07.844035 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 51986 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:07.845365 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:07.849159 systemd-logind[1731]: New session 11 of user core. Jan 29 16:11:07.854232 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:11:08.220806 sshd[4898]: Connection closed by 10.200.16.10 port 51986 Jan 29 16:11:08.221332 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:08.224730 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:51986.service: Deactivated successfully. Jan 29 16:11:08.227625 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:11:08.229043 systemd-logind[1731]: Session 11 logged out. 
Waiting for processes to exit. Jan 29 16:11:08.230718 systemd-logind[1731]: Removed session 11. Jan 29 16:11:13.301311 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:51998.service - OpenSSH per-connection server daemon (10.200.16.10:51998). Jan 29 16:11:13.720829 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 51998 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:13.721991 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:13.727864 systemd-logind[1731]: New session 12 of user core. Jan 29 16:11:13.736431 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:11:14.081941 sshd[4913]: Connection closed by 10.200.16.10 port 51998 Jan 29 16:11:14.082543 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:14.085956 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:51998.service: Deactivated successfully. Jan 29 16:11:14.088163 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:11:14.089191 systemd-logind[1731]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:11:14.091264 systemd-logind[1731]: Removed session 12. Jan 29 16:11:19.162359 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:43130.service - OpenSSH per-connection server daemon (10.200.16.10:43130). Jan 29 16:11:19.572341 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 43130 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:19.573652 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:19.578827 systemd-logind[1731]: New session 13 of user core. Jan 29 16:11:19.584246 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 29 16:11:19.936149 sshd[4930]: Connection closed by 10.200.16.10 port 43130 Jan 29 16:11:19.936549 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:19.941711 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:43130.service: Deactivated successfully. Jan 29 16:11:19.941986 systemd-logind[1731]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:11:19.943804 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:11:19.945836 systemd-logind[1731]: Removed session 13. Jan 29 16:11:20.019319 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:43140.service - OpenSSH per-connection server daemon (10.200.16.10:43140). Jan 29 16:11:20.437196 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 43140 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:20.438858 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:20.443581 systemd-logind[1731]: New session 14 of user core. Jan 29 16:11:20.451255 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:11:20.833638 sshd[4945]: Connection closed by 10.200.16.10 port 43140 Jan 29 16:11:20.834218 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:20.838011 systemd-logind[1731]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:11:20.838840 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:43140.service: Deactivated successfully. Jan 29 16:11:20.840765 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:11:20.842645 systemd-logind[1731]: Removed session 14. Jan 29 16:11:20.916389 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:43148.service - OpenSSH per-connection server daemon (10.200.16.10:43148). 
Jan 29 16:11:21.338143 sshd[4955]: Accepted publickey for core from 10.200.16.10 port 43148 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:21.340056 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:21.344286 systemd-logind[1731]: New session 15 of user core. Jan 29 16:11:21.353260 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:11:21.739301 sshd[4957]: Connection closed by 10.200.16.10 port 43148 Jan 29 16:11:21.739836 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:21.743486 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:43148.service: Deactivated successfully. Jan 29 16:11:21.745666 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:11:21.747738 systemd-logind[1731]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:11:21.748574 systemd-logind[1731]: Removed session 15. Jan 29 16:11:26.824333 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:56098.service - OpenSSH per-connection server daemon (10.200.16.10:56098). Jan 29 16:11:27.256971 sshd[4968]: Accepted publickey for core from 10.200.16.10 port 56098 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:27.258205 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:27.262757 systemd-logind[1731]: New session 16 of user core. Jan 29 16:11:27.266252 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:11:27.628492 sshd[4970]: Connection closed by 10.200.16.10 port 56098 Jan 29 16:11:27.629052 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:27.632317 systemd-logind[1731]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:11:27.633395 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:56098.service: Deactivated successfully. 
Jan 29 16:11:27.635355 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:11:27.636738 systemd-logind[1731]: Removed session 16. Jan 29 16:11:27.708331 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:56114.service - OpenSSH per-connection server daemon (10.200.16.10:56114). Jan 29 16:11:28.120425 sshd[4981]: Accepted publickey for core from 10.200.16.10 port 56114 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:28.121745 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:28.125904 systemd-logind[1731]: New session 17 of user core. Jan 29 16:11:28.131232 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:11:28.510438 sshd[4983]: Connection closed by 10.200.16.10 port 56114 Jan 29 16:11:28.510952 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:28.514376 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:56114.service: Deactivated successfully. Jan 29 16:11:28.516342 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:11:28.517387 systemd-logind[1731]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:11:28.518625 systemd-logind[1731]: Removed session 17. Jan 29 16:11:28.591322 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:56120.service - OpenSSH per-connection server daemon (10.200.16.10:56120). Jan 29 16:11:29.009845 sshd[4993]: Accepted publickey for core from 10.200.16.10 port 56120 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:29.011178 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:29.015428 systemd-logind[1731]: New session 18 of user core. Jan 29 16:11:29.022247 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 16:11:30.937642 sshd[4995]: Connection closed by 10.200.16.10 port 56120 Jan 29 16:11:30.938019 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:30.941775 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:56120.service: Deactivated successfully. Jan 29 16:11:30.943577 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:11:30.944578 systemd-logind[1731]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:11:30.945490 systemd-logind[1731]: Removed session 18. Jan 29 16:11:31.022384 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:56126.service - OpenSSH per-connection server daemon (10.200.16.10:56126). Jan 29 16:11:31.435770 sshd[5012]: Accepted publickey for core from 10.200.16.10 port 56126 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:31.437225 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:31.441164 systemd-logind[1731]: New session 19 of user core. Jan 29 16:11:31.447236 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:11:31.903505 sshd[5014]: Connection closed by 10.200.16.10 port 56126 Jan 29 16:11:31.903417 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:31.906889 systemd-logind[1731]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:11:31.907401 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:56126.service: Deactivated successfully. Jan 29 16:11:31.910205 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:11:31.911548 systemd-logind[1731]: Removed session 19. Jan 29 16:11:31.985327 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:56132.service - OpenSSH per-connection server daemon (10.200.16.10:56132). 
Jan 29 16:11:32.404909 sshd[5024]: Accepted publickey for core from 10.200.16.10 port 56132 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:32.406664 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:32.411239 systemd-logind[1731]: New session 20 of user core. Jan 29 16:11:32.418316 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:11:32.769359 sshd[5026]: Connection closed by 10.200.16.10 port 56132 Jan 29 16:11:32.769889 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:32.772641 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:56132.service: Deactivated successfully. Jan 29 16:11:32.774590 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:11:32.776218 systemd-logind[1731]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:11:32.777306 systemd-logind[1731]: Removed session 20. Jan 29 16:11:37.851342 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:44868.service - OpenSSH per-connection server daemon (10.200.16.10:44868). Jan 29 16:11:38.285277 sshd[5041]: Accepted publickey for core from 10.200.16.10 port 44868 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:38.286518 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:38.291004 systemd-logind[1731]: New session 21 of user core. Jan 29 16:11:38.299219 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:11:38.656512 sshd[5043]: Connection closed by 10.200.16.10 port 44868 Jan 29 16:11:38.657155 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:38.659789 systemd-logind[1731]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:11:38.660038 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:44868.service: Deactivated successfully. 
Jan 29 16:11:38.661609 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:11:38.663718 systemd-logind[1731]: Removed session 21. Jan 29 16:11:43.740263 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:44870.service - OpenSSH per-connection server daemon (10.200.16.10:44870). Jan 29 16:11:44.163189 sshd[5055]: Accepted publickey for core from 10.200.16.10 port 44870 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:44.164908 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:44.169313 systemd-logind[1731]: New session 22 of user core. Jan 29 16:11:44.176302 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:11:44.528594 sshd[5057]: Connection closed by 10.200.16.10 port 44870 Jan 29 16:11:44.527985 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:44.530725 systemd-logind[1731]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:11:44.531053 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:44870.service: Deactivated successfully. Jan 29 16:11:44.532929 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:11:44.535072 systemd-logind[1731]: Removed session 22. Jan 29 16:11:49.607080 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:51218.service - OpenSSH per-connection server daemon (10.200.16.10:51218). Jan 29 16:11:50.041179 sshd[5071]: Accepted publickey for core from 10.200.16.10 port 51218 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:50.042470 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:50.047956 systemd-logind[1731]: New session 23 of user core. Jan 29 16:11:50.053264 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 16:11:50.449181 sshd[5073]: Connection closed by 10.200.16.10 port 51218 Jan 29 16:11:50.449833 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:50.453161 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:51218.service: Deactivated successfully. Jan 29 16:11:50.455785 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:11:50.456903 systemd-logind[1731]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:11:50.458353 systemd-logind[1731]: Removed session 23. Jan 29 16:11:50.534324 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:51228.service - OpenSSH per-connection server daemon (10.200.16.10:51228). Jan 29 16:11:50.946680 sshd[5085]: Accepted publickey for core from 10.200.16.10 port 51228 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:50.947950 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:50.951815 systemd-logind[1731]: New session 24 of user core. Jan 29 16:11:50.958275 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 16:11:53.250699 containerd[1755]: time="2025-01-29T16:11:53.250640899Z" level=info msg="StopContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" with timeout 30 (s)" Jan 29 16:11:53.252309 containerd[1755]: time="2025-01-29T16:11:53.251527139Z" level=info msg="Stop container \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" with signal terminated" Jan 29 16:11:53.261967 containerd[1755]: time="2025-01-29T16:11:53.261925790Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:11:53.263790 systemd[1]: cri-containerd-ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2.scope: Deactivated successfully. Jan 29 16:11:53.271561 containerd[1755]: time="2025-01-29T16:11:53.271388720Z" level=info msg="StopContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" with timeout 2 (s)" Jan 29 16:11:53.271993 containerd[1755]: time="2025-01-29T16:11:53.271869240Z" level=info msg="Stop container \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" with signal terminated" Jan 29 16:11:53.281019 systemd-networkd[1516]: lxc_health: Link DOWN Jan 29 16:11:53.281026 systemd-networkd[1516]: lxc_health: Lost carrier Jan 29 16:11:53.291672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2-rootfs.mount: Deactivated successfully. Jan 29 16:11:53.296625 systemd[1]: cri-containerd-d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794.scope: Deactivated successfully. Jan 29 16:11:53.297177 systemd[1]: cri-containerd-d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794.scope: Consumed 6.307s CPU time, 123.1M memory peak, 144K read from disk, 12.9M written to disk. 
Jan 29 16:11:53.317872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794-rootfs.mount: Deactivated successfully. Jan 29 16:11:53.361037 containerd[1755]: time="2025-01-29T16:11:53.360892651Z" level=info msg="shim disconnected" id=d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794 namespace=k8s.io Jan 29 16:11:53.361304 containerd[1755]: time="2025-01-29T16:11:53.361113531Z" level=warning msg="cleaning up after shim disconnected" id=d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794 namespace=k8s.io Jan 29 16:11:53.361304 containerd[1755]: time="2025-01-29T16:11:53.361126851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:53.361304 containerd[1755]: time="2025-01-29T16:11:53.360972371Z" level=info msg="shim disconnected" id=ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2 namespace=k8s.io Jan 29 16:11:53.361304 containerd[1755]: time="2025-01-29T16:11:53.361205171Z" level=warning msg="cleaning up after shim disconnected" id=ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2 namespace=k8s.io Jan 29 16:11:53.361304 containerd[1755]: time="2025-01-29T16:11:53.361213531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:53.374248 containerd[1755]: time="2025-01-29T16:11:53.374124985Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:11:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:11:53.380772 containerd[1755]: time="2025-01-29T16:11:53.380583751Z" level=info msg="StopContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" returns successfully" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381212072Z" level=info msg="StopPodSandbox for 
\"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\"" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381248992Z" level=info msg="Container to stop \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381266472Z" level=info msg="Container to stop \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381279152Z" level=info msg="Container to stop \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381287352Z" level=info msg="Container to stop \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.381424 containerd[1755]: time="2025-01-29T16:11:53.381295112Z" level=info msg="Container to stop \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.383419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136-shm.mount: Deactivated successfully. 
Jan 29 16:11:53.384572 containerd[1755]: time="2025-01-29T16:11:53.384346835Z" level=info msg="StopContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" returns successfully" Jan 29 16:11:53.385823 containerd[1755]: time="2025-01-29T16:11:53.385787436Z" level=info msg="StopPodSandbox for \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\"" Jan 29 16:11:53.386156 containerd[1755]: time="2025-01-29T16:11:53.385822157Z" level=info msg="Container to stop \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:53.387873 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a-shm.mount: Deactivated successfully. Jan 29 16:11:53.399506 systemd[1]: cri-containerd-f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136.scope: Deactivated successfully. Jan 29 16:11:53.411486 systemd[1]: cri-containerd-6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a.scope: Deactivated successfully. 
Jan 29 16:11:53.435232 containerd[1755]: time="2025-01-29T16:11:53.435001287Z" level=info msg="shim disconnected" id=6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a namespace=k8s.io Jan 29 16:11:53.435232 containerd[1755]: time="2025-01-29T16:11:53.435056207Z" level=warning msg="cleaning up after shim disconnected" id=6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a namespace=k8s.io Jan 29 16:11:53.435232 containerd[1755]: time="2025-01-29T16:11:53.435066847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:53.436096 containerd[1755]: time="2025-01-29T16:11:53.435839808Z" level=info msg="shim disconnected" id=f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136 namespace=k8s.io Jan 29 16:11:53.436096 containerd[1755]: time="2025-01-29T16:11:53.435887688Z" level=warning msg="cleaning up after shim disconnected" id=f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136 namespace=k8s.io Jan 29 16:11:53.436096 containerd[1755]: time="2025-01-29T16:11:53.435895488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:53.449279 containerd[1755]: time="2025-01-29T16:11:53.448729141Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:11:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:11:53.449279 containerd[1755]: time="2025-01-29T16:11:53.449133461Z" level=info msg="TearDown network for sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" successfully" Jan 29 16:11:53.449279 containerd[1755]: time="2025-01-29T16:11:53.449151901Z" level=info msg="StopPodSandbox for \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" returns successfully" Jan 29 16:11:53.450254 containerd[1755]: time="2025-01-29T16:11:53.450217822Z" level=info msg="TearDown network for sandbox 
\"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" successfully" Jan 29 16:11:53.450371 containerd[1755]: time="2025-01-29T16:11:53.450355182Z" level=info msg="StopPodSandbox for \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" returns successfully" Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605210 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df400005-747d-4fa7-a2e0-7be1b35f4388-cilium-config-path\") pod \"df400005-747d-4fa7-a2e0-7be1b35f4388\" (UID: \"df400005-747d-4fa7-a2e0-7be1b35f4388\") " Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605254 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-net\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605270 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-run\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605295 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnzsk\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-kube-api-access-vnzsk\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605310 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-hubble-tls\") pod 
\"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.605791 kubelet[3498]: I0129 16:11:53.605325 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-cgroup\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605338 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cni-path\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605355 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/775449fb-d9b9-45ca-b745-d4770c3cbb45-clustermesh-secrets\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605371 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-hostproc\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605386 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-xtables-lock\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605400 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-lib-modules\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606320 kubelet[3498]: I0129 16:11:53.605417 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-etc-cni-netd\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606446 kubelet[3498]: I0129 16:11:53.605433 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-config-path\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606446 kubelet[3498]: I0129 16:11:53.605447 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-bpf-maps\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606446 kubelet[3498]: I0129 16:11:53.605462 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-kernel\") pod \"775449fb-d9b9-45ca-b745-d4770c3cbb45\" (UID: \"775449fb-d9b9-45ca-b745-d4770c3cbb45\") " Jan 29 16:11:53.606446 kubelet[3498]: I0129 16:11:53.605478 3498 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2vk\" (UniqueName: \"kubernetes.io/projected/df400005-747d-4fa7-a2e0-7be1b35f4388-kube-api-access-kc2vk\") pod \"df400005-747d-4fa7-a2e0-7be1b35f4388\" (UID: \"df400005-747d-4fa7-a2e0-7be1b35f4388\") " Jan 29 16:11:53.607418 kubelet[3498]: I0129 
16:11:53.607205 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df400005-747d-4fa7-a2e0-7be1b35f4388-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df400005-747d-4fa7-a2e0-7be1b35f4388" (UID: "df400005-747d-4fa7-a2e0-7be1b35f4388"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:11:53.607659 kubelet[3498]: I0129 16:11:53.607534 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-hostproc" (OuterVolumeSpecName: "hostproc") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.607745 kubelet[3498]: I0129 16:11:53.607730 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.608766 kubelet[3498]: I0129 16:11:53.607952 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.608766 kubelet[3498]: I0129 16:11:53.607970 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.608766 kubelet[3498]: I0129 16:11:53.608363 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.608766 kubelet[3498]: I0129 16:11:53.608543 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.609141 kubelet[3498]: I0129 16:11:53.609042 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.609141 kubelet[3498]: I0129 16:11:53.609073 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.612285 kubelet[3498]: I0129 16:11:53.612254 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.612450 kubelet[3498]: I0129 16:11:53.612292 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cni-path" (OuterVolumeSpecName: "cni-path") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:53.612594 kubelet[3498]: I0129 16:11:53.612571 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:11:53.612670 kubelet[3498]: I0129 16:11:53.612646 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df400005-747d-4fa7-a2e0-7be1b35f4388-kube-api-access-kc2vk" (OuterVolumeSpecName: "kube-api-access-kc2vk") pod "df400005-747d-4fa7-a2e0-7be1b35f4388" (UID: "df400005-747d-4fa7-a2e0-7be1b35f4388"). InnerVolumeSpecName "kube-api-access-kc2vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:53.613186 kubelet[3498]: I0129 16:11:53.613156 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:53.613950 kubelet[3498]: I0129 16:11:53.613694 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-kube-api-access-vnzsk" (OuterVolumeSpecName: "kube-api-access-vnzsk") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "kube-api-access-vnzsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:53.614022 kubelet[3498]: I0129 16:11:53.614005 3498 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775449fb-d9b9-45ca-b745-d4770c3cbb45-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "775449fb-d9b9-45ca-b745-d4770c3cbb45" (UID: "775449fb-d9b9-45ca-b745-d4770c3cbb45"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:11:53.706283 kubelet[3498]: I0129 16:11:53.706248 3498 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-run\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706476 kubelet[3498]: I0129 16:11:53.706463 3498 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vnzsk\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-kube-api-access-vnzsk\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706561 kubelet[3498]: I0129 16:11:53.706550 3498 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/775449fb-d9b9-45ca-b745-d4770c3cbb45-hubble-tls\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706620 3498 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-cgroup\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706635 3498 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/775449fb-d9b9-45ca-b745-d4770c3cbb45-clustermesh-secrets\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706643 3498 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-hostproc\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706654 3498 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-xtables-lock\") on node 
\"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706664 3498 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-lib-modules\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706673 3498 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-cni-path\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706696 kubelet[3498]: I0129 16:11:53.706681 3498 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-etc-cni-netd\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706856 3498 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/775449fb-d9b9-45ca-b745-d4770c3cbb45-cilium-config-path\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706871 3498 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-bpf-maps\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706880 3498 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-kernel\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706887 3498 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kc2vk\" (UniqueName: \"kubernetes.io/projected/df400005-747d-4fa7-a2e0-7be1b35f4388-kube-api-access-kc2vk\") on 
node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706897 3498 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/775449fb-d9b9-45ca-b745-d4770c3cbb45-host-proc-sys-net\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.706935 kubelet[3498]: I0129 16:11:53.706906 3498 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df400005-747d-4fa7-a2e0-7be1b35f4388-cilium-config-path\") on node \"ci-4230.0.0-a-732fe1e27c\" DevicePath \"\"" Jan 29 16:11:53.738371 kubelet[3498]: I0129 16:11:53.737840 3498 scope.go:117] "RemoveContainer" containerID="ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2" Jan 29 16:11:53.740759 containerd[1755]: time="2025-01-29T16:11:53.740264558Z" level=info msg="RemoveContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\"" Jan 29 16:11:53.744834 systemd[1]: Removed slice kubepods-besteffort-poddf400005_747d_4fa7_a2e0_7be1b35f4388.slice - libcontainer container kubepods-besteffort-poddf400005_747d_4fa7_a2e0_7be1b35f4388.slice. Jan 29 16:11:53.750746 systemd[1]: Removed slice kubepods-burstable-pod775449fb_d9b9_45ca_b745_d4770c3cbb45.slice - libcontainer container kubepods-burstable-pod775449fb_d9b9_45ca_b745_d4770c3cbb45.slice. Jan 29 16:11:53.750953 systemd[1]: kubepods-burstable-pod775449fb_d9b9_45ca_b745_d4770c3cbb45.slice: Consumed 6.383s CPU time, 123.5M memory peak, 144K read from disk, 12.9M written to disk. 
Jan 29 16:11:53.757425 containerd[1755]: time="2025-01-29T16:11:53.757378336Z" level=info msg="RemoveContainer for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" returns successfully" Jan 29 16:11:53.757805 kubelet[3498]: I0129 16:11:53.757702 3498 scope.go:117] "RemoveContainer" containerID="ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2" Jan 29 16:11:53.758599 containerd[1755]: time="2025-01-29T16:11:53.758517297Z" level=error msg="ContainerStatus for \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\": not found" Jan 29 16:11:53.758703 kubelet[3498]: E0129 16:11:53.758670 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\": not found" containerID="ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2" Jan 29 16:11:53.758915 kubelet[3498]: I0129 16:11:53.758697 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2"} err="failed to get container status \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff1833fbbcd872ab336d59ed8511185c1537bc8a4998eabc80a8af1f1ec567d2\": not found" Jan 29 16:11:53.758915 kubelet[3498]: I0129 16:11:53.758905 3498 scope.go:117] "RemoveContainer" containerID="d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794" Jan 29 16:11:53.761162 containerd[1755]: time="2025-01-29T16:11:53.760892859Z" level=info msg="RemoveContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\"" Jan 29 
16:11:53.773500 containerd[1755]: time="2025-01-29T16:11:53.773433232Z" level=info msg="RemoveContainer for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" returns successfully" Jan 29 16:11:53.773706 kubelet[3498]: I0129 16:11:53.773646 3498 scope.go:117] "RemoveContainer" containerID="70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556" Jan 29 16:11:53.774739 containerd[1755]: time="2025-01-29T16:11:53.774709593Z" level=info msg="RemoveContainer for \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\"" Jan 29 16:11:53.782153 containerd[1755]: time="2025-01-29T16:11:53.782115921Z" level=info msg="RemoveContainer for \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\" returns successfully" Jan 29 16:11:53.782706 kubelet[3498]: I0129 16:11:53.782385 3498 scope.go:117] "RemoveContainer" containerID="28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec" Jan 29 16:11:53.783378 containerd[1755]: time="2025-01-29T16:11:53.783356002Z" level=info msg="RemoveContainer for \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\"" Jan 29 16:11:53.793023 containerd[1755]: time="2025-01-29T16:11:53.792950972Z" level=info msg="RemoveContainer for \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\" returns successfully" Jan 29 16:11:53.793260 kubelet[3498]: I0129 16:11:53.793228 3498 scope.go:117] "RemoveContainer" containerID="f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03" Jan 29 16:11:53.794292 containerd[1755]: time="2025-01-29T16:11:53.794263133Z" level=info msg="RemoveContainer for \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\"" Jan 29 16:11:53.802766 containerd[1755]: time="2025-01-29T16:11:53.802646742Z" level=info msg="RemoveContainer for \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\" returns successfully" Jan 29 16:11:53.802968 kubelet[3498]: I0129 16:11:53.802948 3498 scope.go:117] "RemoveContainer" 
containerID="ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466" Jan 29 16:11:53.804110 containerd[1755]: time="2025-01-29T16:11:53.804002383Z" level=info msg="RemoveContainer for \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\"" Jan 29 16:11:53.811893 containerd[1755]: time="2025-01-29T16:11:53.811818631Z" level=info msg="RemoveContainer for \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\" returns successfully" Jan 29 16:11:53.812145 kubelet[3498]: I0129 16:11:53.812122 3498 scope.go:117] "RemoveContainer" containerID="d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794" Jan 29 16:11:53.812505 containerd[1755]: time="2025-01-29T16:11:53.812387672Z" level=error msg="ContainerStatus for \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\": not found" Jan 29 16:11:53.812641 kubelet[3498]: E0129 16:11:53.812524 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\": not found" containerID="d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794" Jan 29 16:11:53.812641 kubelet[3498]: I0129 16:11:53.812545 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794"} err="failed to get container status \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7cb74f65eca84c4493b34d13687e43594d51fd674438dd9bf19ed3a2dc85794\": not found" Jan 29 16:11:53.812641 kubelet[3498]: I0129 16:11:53.812566 3498 scope.go:117] "RemoveContainer" 
containerID="70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556" Jan 29 16:11:53.813131 containerd[1755]: time="2025-01-29T16:11:53.812955992Z" level=error msg="ContainerStatus for \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\": not found" Jan 29 16:11:53.813374 kubelet[3498]: E0129 16:11:53.813246 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\": not found" containerID="70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556" Jan 29 16:11:53.813374 kubelet[3498]: I0129 16:11:53.813275 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556"} err="failed to get container status \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\": rpc error: code = NotFound desc = an error occurred when try to find container \"70c03be0de03101f378218f01d20140ccd8c10721e1362486db483acadb76556\": not found" Jan 29 16:11:53.813374 kubelet[3498]: I0129 16:11:53.813291 3498 scope.go:117] "RemoveContainer" containerID="28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec" Jan 29 16:11:53.813666 containerd[1755]: time="2025-01-29T16:11:53.813590073Z" level=error msg="ContainerStatus for \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\": not found" Jan 29 16:11:53.813716 kubelet[3498]: E0129 16:11:53.813701 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\": not found" containerID="28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec" Jan 29 16:11:53.813781 kubelet[3498]: I0129 16:11:53.813721 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec"} err="failed to get container status \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"28cc49e147cc3103404901f6c37153063d4612c8697ae9ab5e0cdae03ecc97ec\": not found" Jan 29 16:11:53.813781 kubelet[3498]: I0129 16:11:53.813740 3498 scope.go:117] "RemoveContainer" containerID="f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03" Jan 29 16:11:53.814108 kubelet[3498]: E0129 16:11:53.814028 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\": not found" containerID="f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03" Jan 29 16:11:53.814108 kubelet[3498]: I0129 16:11:53.814043 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03"} err="failed to get container status \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\": not found" Jan 29 16:11:53.814108 kubelet[3498]: I0129 16:11:53.814056 3498 scope.go:117] "RemoveContainer" containerID="ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466" Jan 29 16:11:53.814269 
containerd[1755]: time="2025-01-29T16:11:53.813932193Z" level=error msg="ContainerStatus for \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7aa96eba23f3be1a296452ee51936328df117dbf89138fac8c418b77add4b03\": not found" Jan 29 16:11:53.814418 containerd[1755]: time="2025-01-29T16:11:53.814358034Z" level=error msg="ContainerStatus for \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\": not found" Jan 29 16:11:53.814559 kubelet[3498]: E0129 16:11:53.814512 3498 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\": not found" containerID="ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466" Jan 29 16:11:53.814599 kubelet[3498]: I0129 16:11:53.814563 3498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466"} err="failed to get container status \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae2a74cb3149872bc87967563d50b028c61fab6ebd202c27dec2f8e1115d4466\": not found" Jan 29 16:11:53.901338 kubelet[3498]: E0129 16:11:53.901182 3498 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:11:54.247450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a-rootfs.mount: Deactivated 
successfully. Jan 29 16:11:54.247545 systemd[1]: var-lib-kubelet-pods-df400005\x2d747d\x2d4fa7\x2da2e0\x2d7be1b35f4388-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkc2vk.mount: Deactivated successfully. Jan 29 16:11:54.247603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136-rootfs.mount: Deactivated successfully. Jan 29 16:11:54.247648 systemd[1]: var-lib-kubelet-pods-775449fb\x2dd9b9\x2d45ca\x2db745\x2dd4770c3cbb45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvnzsk.mount: Deactivated successfully. Jan 29 16:11:54.247695 systemd[1]: var-lib-kubelet-pods-775449fb\x2dd9b9\x2d45ca\x2db745\x2dd4770c3cbb45-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:11:54.247744 systemd[1]: var-lib-kubelet-pods-775449fb\x2dd9b9\x2d45ca\x2db745\x2dd4770c3cbb45-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:11:55.249920 sshd[5087]: Connection closed by 10.200.16.10 port 51228 Jan 29 16:11:55.250306 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:55.254458 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:51228.service: Deactivated successfully. Jan 29 16:11:55.256248 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:11:55.256437 systemd[1]: session-24.scope: Consumed 1.429s CPU time, 23.5M memory peak. Jan 29 16:11:55.257452 systemd-logind[1731]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:11:55.258617 systemd-logind[1731]: Removed session 24. 
Jan 29 16:11:55.309335 kubelet[3498]: E0129 16:11:55.309022 3498 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20" Jan 29 16:11:55.312180 kubelet[3498]: I0129 16:11:55.312115 3498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" path="/var/lib/kubelet/pods/775449fb-d9b9-45ca-b745-d4770c3cbb45/volumes" Jan 29 16:11:55.312775 kubelet[3498]: I0129 16:11:55.312748 3498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df400005-747d-4fa7-a2e0-7be1b35f4388" path="/var/lib/kubelet/pods/df400005-747d-4fa7-a2e0-7be1b35f4388/volumes" Jan 29 16:11:55.327943 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:51234.service - OpenSSH per-connection server daemon (10.200.16.10:51234). Jan 29 16:11:55.757552 sshd[5247]: Accepted publickey for core from 10.200.16.10 port 51234 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:55.758846 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:55.762898 systemd-logind[1731]: New session 25 of user core. Jan 29 16:11:55.768256 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 16:11:56.723215 kubelet[3498]: I0129 16:11:56.722314 3498 topology_manager.go:215] "Topology Admit Handler" podUID="125b44ac-2fdc-4dc5-9e0c-e281242eaa9a" podNamespace="kube-system" podName="cilium-m7vw4" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722372 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="mount-cgroup" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722381 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="apply-sysctl-overwrites" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722388 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="cilium-agent" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722394 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df400005-747d-4fa7-a2e0-7be1b35f4388" containerName="cilium-operator" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722399 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="clean-cilium-state" Jan 29 16:11:56.723215 kubelet[3498]: E0129 16:11:56.722406 3498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="mount-bpf-fs" Jan 29 16:11:56.723215 kubelet[3498]: I0129 16:11:56.722427 3498 memory_manager.go:354] "RemoveStaleState removing state" podUID="775449fb-d9b9-45ca-b745-d4770c3cbb45" containerName="cilium-agent" Jan 29 16:11:56.723215 kubelet[3498]: I0129 16:11:56.722432 3498 memory_manager.go:354] "RemoveStaleState removing state" podUID="df400005-747d-4fa7-a2e0-7be1b35f4388" containerName="cilium-operator" Jan 29 16:11:56.732442 systemd[1]: Created slice kubepods-burstable-pod125b44ac_2fdc_4dc5_9e0c_e281242eaa9a.slice - libcontainer container 
kubepods-burstable-pod125b44ac_2fdc_4dc5_9e0c_e281242eaa9a.slice. Jan 29 16:11:56.768186 sshd[5249]: Connection closed by 10.200.16.10 port 51234 Jan 29 16:11:56.770552 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:56.777776 systemd-logind[1731]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:11:56.777971 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:51234.service: Deactivated successfully. Jan 29 16:11:56.781433 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:11:56.785132 systemd-logind[1731]: Removed session 25. Jan 29 16:11:56.822099 kubelet[3498]: I0129 16:11:56.822045 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-hostproc\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4" Jan 29 16:11:56.822099 kubelet[3498]: I0129 16:11:56.822103 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-etc-cni-netd\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4" Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822127 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-cilium-config-path\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4" Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822144 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-host-proc-sys-kernel\") 
pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822162 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-cni-path\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822178 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-xtables-lock\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822194 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-hubble-tls\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822271 kubelet[3498]: I0129 16:11:56.822209 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-cilium-run\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822224 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-lib-modules\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822238 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6gz\" (UniqueName: \"kubernetes.io/projected/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-kube-api-access-4k6gz\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822254 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-bpf-maps\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822271 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-cilium-cgroup\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822286 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-clustermesh-secrets\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822396 kubelet[3498]: I0129 16:11:56.822302 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-cilium-ipsec-secrets\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.822512 kubelet[3498]: I0129 16:11:56.822317 3498 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/125b44ac-2fdc-4dc5-9e0c-e281242eaa9a-host-proc-sys-net\") pod \"cilium-m7vw4\" (UID: \"125b44ac-2fdc-4dc5-9e0c-e281242eaa9a\") " pod="kube-system/cilium-m7vw4"
Jan 29 16:11:56.843738 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:40206.service - OpenSSH per-connection server daemon (10.200.16.10:40206).
Jan 29 16:11:57.036549 containerd[1755]: time="2025-01-29T16:11:57.035870882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7vw4,Uid:125b44ac-2fdc-4dc5-9e0c-e281242eaa9a,Namespace:kube-system,Attempt:0,}"
Jan 29 16:11:57.077474 containerd[1755]: time="2025-01-29T16:11:57.077343363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:11:57.077474 containerd[1755]: time="2025-01-29T16:11:57.077396723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:11:57.077474 containerd[1755]: time="2025-01-29T16:11:57.077418003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:11:57.078306 containerd[1755]: time="2025-01-29T16:11:57.078101004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:11:57.093316 systemd[1]: Started cri-containerd-2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a.scope - libcontainer container 2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a.
Jan 29 16:11:57.114070 containerd[1755]: time="2025-01-29T16:11:57.114013640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7vw4,Uid:125b44ac-2fdc-4dc5-9e0c-e281242eaa9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\""
Jan 29 16:11:57.116824 containerd[1755]: time="2025-01-29T16:11:57.116771803Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:11:57.152514 containerd[1755]: time="2025-01-29T16:11:57.152422598Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57\""
Jan 29 16:11:57.153748 containerd[1755]: time="2025-01-29T16:11:57.152905399Z" level=info msg="StartContainer for \"f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57\""
Jan 29 16:11:57.178248 systemd[1]: Started cri-containerd-f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57.scope - libcontainer container f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57.
Jan 29 16:11:57.208423 containerd[1755]: time="2025-01-29T16:11:57.208384694Z" level=info msg="StartContainer for \"f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57\" returns successfully"
Jan 29 16:11:57.213713 systemd[1]: cri-containerd-f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57.scope: Deactivated successfully.
Jan 29 16:11:57.270675 sshd[5261]: Accepted publickey for core from 10.200.16.10 port 40206 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:11:57.272035 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:11:57.274955 containerd[1755]: time="2025-01-29T16:11:57.274705041Z" level=info msg="shim disconnected" id=f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57 namespace=k8s.io
Jan 29 16:11:57.274955 containerd[1755]: time="2025-01-29T16:11:57.274770121Z" level=warning msg="cleaning up after shim disconnected" id=f913b479accc92c85e9c9ecef091a64226704d96aba7dfeef9ab21c11ceb0b57 namespace=k8s.io
Jan 29 16:11:57.274955 containerd[1755]: time="2025-01-29T16:11:57.274778641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:57.279312 systemd-logind[1731]: New session 26 of user core.
Jan 29 16:11:57.283328 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 16:11:57.287253 containerd[1755]: time="2025-01-29T16:11:57.287163813Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:11:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:11:57.309028 kubelet[3498]: E0129 16:11:57.308966 3498 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20"
Jan 29 16:11:57.611161 sshd[5368]: Connection closed by 10.200.16.10 port 40206
Jan 29 16:11:57.610582 sshd-session[5261]: pam_unix(sshd:session): session closed for user core
Jan 29 16:11:57.614216 systemd-logind[1731]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:11:57.614794 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:40206.service: Deactivated successfully.
Jan 29 16:11:57.616867 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:11:57.617982 systemd-logind[1731]: Removed session 26.
Jan 29 16:11:57.699330 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:40208.service - OpenSSH per-connection server daemon (10.200.16.10:40208).
Jan 29 16:11:57.757439 containerd[1755]: time="2025-01-29T16:11:57.757392604Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:11:57.788268 containerd[1755]: time="2025-01-29T16:11:57.788218955Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd\""
Jan 29 16:11:57.789837 containerd[1755]: time="2025-01-29T16:11:57.788980755Z" level=info msg="StartContainer for \"930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd\""
Jan 29 16:11:57.811349 systemd[1]: Started cri-containerd-930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd.scope - libcontainer container 930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd.
Jan 29 16:11:57.840110 containerd[1755]: time="2025-01-29T16:11:57.840001966Z" level=info msg="StartContainer for \"930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd\" returns successfully"
Jan 29 16:11:57.844370 systemd[1]: cri-containerd-930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd.scope: Deactivated successfully.
Jan 29 16:11:57.882992 containerd[1755]: time="2025-01-29T16:11:57.882753129Z" level=info msg="shim disconnected" id=930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd namespace=k8s.io
Jan 29 16:11:57.882992 containerd[1755]: time="2025-01-29T16:11:57.882805849Z" level=warning msg="cleaning up after shim disconnected" id=930da1cec1641a5abc08daf9e030cdcd45f8398e214fd6309691f548124715fd namespace=k8s.io
Jan 29 16:11:57.882992 containerd[1755]: time="2025-01-29T16:11:57.882814169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:58.127589 sshd[5375]: Accepted publickey for core from 10.200.16.10 port 40208 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:11:58.128904 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:11:58.133191 systemd-logind[1731]: New session 27 of user core.
Jan 29 16:11:58.141241 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:11:58.167481 kubelet[3498]: I0129 16:11:58.166334 3498 setters.go:580] "Node became not ready" node="ci-4230.0.0-a-732fe1e27c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:11:58Z","lastTransitionTime":"2025-01-29T16:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:11:58.760895 containerd[1755]: time="2025-01-29T16:11:58.760846608Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:11:58.805802 containerd[1755]: time="2025-01-29T16:11:58.805714773Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455\""
Jan 29 16:11:58.806550 containerd[1755]: time="2025-01-29T16:11:58.806462894Z" level=info msg="StartContainer for \"fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455\""
Jan 29 16:11:58.834255 systemd[1]: Started cri-containerd-fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455.scope - libcontainer container fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455.
Jan 29 16:11:58.864751 systemd[1]: cri-containerd-fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455.scope: Deactivated successfully.
Jan 29 16:11:58.868476 containerd[1755]: time="2025-01-29T16:11:58.868364675Z" level=info msg="StartContainer for \"fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455\" returns successfully"
Jan 29 16:11:58.902144 kubelet[3498]: E0129 16:11:58.902071 3498 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:11:58.904045 containerd[1755]: time="2025-01-29T16:11:58.903857391Z" level=info msg="shim disconnected" id=fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455 namespace=k8s.io
Jan 29 16:11:58.904045 containerd[1755]: time="2025-01-29T16:11:58.903903911Z" level=warning msg="cleaning up after shim disconnected" id=fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455 namespace=k8s.io
Jan 29 16:11:58.904045 containerd[1755]: time="2025-01-29T16:11:58.903912511Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:58.928331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe13874f1cd5f17ed02d28957d6cdff01b41be1149a98db74e28130cd3d13455-rootfs.mount: Deactivated successfully.
Jan 29 16:11:59.310404 kubelet[3498]: E0129 16:11:59.309316 3498 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20"
Jan 29 16:11:59.764071 containerd[1755]: time="2025-01-29T16:11:59.763934972Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:11:59.801587 containerd[1755]: time="2025-01-29T16:11:59.801491009Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b\""
Jan 29 16:11:59.803374 containerd[1755]: time="2025-01-29T16:11:59.802276930Z" level=info msg="StartContainer for \"d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b\""
Jan 29 16:11:59.833256 systemd[1]: Started cri-containerd-d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b.scope - libcontainer container d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b.
Jan 29 16:11:59.868616 systemd[1]: cri-containerd-d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b.scope: Deactivated successfully.
Jan 29 16:11:59.870552 containerd[1755]: time="2025-01-29T16:11:59.870411558Z" level=info msg="StartContainer for \"d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b\" returns successfully"
Jan 29 16:11:59.907266 containerd[1755]: time="2025-01-29T16:11:59.907020595Z" level=info msg="shim disconnected" id=d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b namespace=k8s.io
Jan 29 16:11:59.907266 containerd[1755]: time="2025-01-29T16:11:59.907139635Z" level=warning msg="cleaning up after shim disconnected" id=d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b namespace=k8s.io
Jan 29 16:11:59.907266 containerd[1755]: time="2025-01-29T16:11:59.907148115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:59.928376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d032ba4aa2ad68814d328cc4d2a9bfd2989766f9c5d923789b3fa997b2879f4b-rootfs.mount: Deactivated successfully.
Jan 29 16:12:00.768742 containerd[1755]: time="2025-01-29T16:12:00.768699817Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:12:00.804708 containerd[1755]: time="2025-01-29T16:12:00.804667013Z" level=info msg="CreateContainer within sandbox \"2506aa8ba92563beeeebfb896bf9390f6763db3de30c8acc835b3de10075707a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c\""
Jan 29 16:12:00.805588 containerd[1755]: time="2025-01-29T16:12:00.805566054Z" level=info msg="StartContainer for \"59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c\""
Jan 29 16:12:00.831387 systemd[1]: Started cri-containerd-59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c.scope - libcontainer container 59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c.
Jan 29 16:12:00.862800 containerd[1755]: time="2025-01-29T16:12:00.862725271Z" level=info msg="StartContainer for \"59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c\" returns successfully"
Jan 29 16:12:01.300120 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 16:12:01.310163 kubelet[3498]: E0129 16:12:01.309552 3498 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20"
Jan 29 16:12:01.789058 kubelet[3498]: I0129 16:12:01.788905 3498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m7vw4" podStartSLOduration=5.788889438 podStartE2EDuration="5.788889438s" podCreationTimestamp="2025-01-29 16:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:12:01.788750318 +0000 UTC m=+178.573583791" watchObservedRunningTime="2025-01-29 16:12:01.788889438 +0000 UTC m=+178.573722871"
Jan 29 16:12:03.310849 kubelet[3498]: E0129 16:12:03.310062 3498 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-g6q5d" podUID="aded3c45-b21b-45f0-bfee-29eb73a91b20"
Jan 29 16:12:03.330374 containerd[1755]: time="2025-01-29T16:12:03.330325781Z" level=info msg="StopPodSandbox for \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\""
Jan 29 16:12:03.330681 containerd[1755]: time="2025-01-29T16:12:03.330433421Z" level=info msg="TearDown network for sandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" successfully"
Jan 29 16:12:03.330681 containerd[1755]: time="2025-01-29T16:12:03.330445821Z" level=info msg="StopPodSandbox for \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" returns successfully"
Jan 29 16:12:03.331246 containerd[1755]: time="2025-01-29T16:12:03.331201622Z" level=info msg="RemovePodSandbox for \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\""
Jan 29 16:12:03.331246 containerd[1755]: time="2025-01-29T16:12:03.331231622Z" level=info msg="Forcibly stopping sandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\""
Jan 29 16:12:03.331354 containerd[1755]: time="2025-01-29T16:12:03.331281262Z" level=info msg="TearDown network for sandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" successfully"
Jan 29 16:12:03.343573 containerd[1755]: time="2025-01-29T16:12:03.343522954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:12:03.343665 containerd[1755]: time="2025-01-29T16:12:03.343593474Z" level=info msg="RemovePodSandbox \"6719fb7bf21c4f73e021fe934c67c192bd7454eae4409c0ea18bb3dca853b31a\" returns successfully"
Jan 29 16:12:03.344346 containerd[1755]: time="2025-01-29T16:12:03.344201035Z" level=info msg="StopPodSandbox for \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\""
Jan 29 16:12:03.344346 containerd[1755]: time="2025-01-29T16:12:03.344281315Z" level=info msg="TearDown network for sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" successfully"
Jan 29 16:12:03.344346 containerd[1755]: time="2025-01-29T16:12:03.344291675Z" level=info msg="StopPodSandbox for \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" returns successfully"
Jan 29 16:12:03.345830 containerd[1755]: time="2025-01-29T16:12:03.344720595Z" level=info msg="RemovePodSandbox for \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\""
Jan 29 16:12:03.345830 containerd[1755]: time="2025-01-29T16:12:03.344747555Z" level=info msg="Forcibly stopping sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\""
Jan 29 16:12:03.345830 containerd[1755]: time="2025-01-29T16:12:03.344791115Z" level=info msg="TearDown network for sandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" successfully"
Jan 29 16:12:03.350764 containerd[1755]: time="2025-01-29T16:12:03.350733761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:12:03.350926 containerd[1755]: time="2025-01-29T16:12:03.350906561Z" level=info msg="RemovePodSandbox \"f653b614770083c34fe2ae47297ce94bac4226658e84f13d7f6442857bb1a136\" returns successfully"
Jan 29 16:12:03.952943 systemd-networkd[1516]: lxc_health: Link UP
Jan 29 16:12:03.965646 systemd-networkd[1516]: lxc_health: Gained carrier
Jan 29 16:12:05.458241 systemd-networkd[1516]: lxc_health: Gained IPv6LL
Jan 29 16:12:06.279230 update_engine[1732]: I20250129 16:12:06.279170 1732 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 29 16:12:06.279230 update_engine[1732]: I20250129 16:12:06.279223 1732 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 29 16:12:06.279606 update_engine[1732]: I20250129 16:12:06.279371 1732 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279703 1732 omaha_request_params.cc:62] Current group set to alpha
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279795 1732 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279803 1732 update_attempter.cc:643] Scheduling an action processor start.
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279819 1732 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279845 1732 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279886 1732 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279892 1732 omaha_request_action.cc:272] Request:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]:
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.279898 1732 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.280902 1732 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:12:06.281372 update_engine[1732]: I20250129 16:12:06.281312 1732 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:12:06.281992 locksmithd[1799]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 29 16:12:06.306039 update_engine[1732]: E20250129 16:12:06.305919 1732 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:12:06.306039 update_engine[1732]: I20250129 16:12:06.306013 1732 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 29 16:12:06.823780 systemd[1]: run-containerd-runc-k8s.io-59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c-runc.tJ0Tlc.mount: Deactivated successfully.
Jan 29 16:12:08.939387 systemd[1]: run-containerd-runc-k8s.io-59f6e0d2898df14173e96a2a4daf4d34f11bc332f512b56c1022fac78e2e659c-runc.vi06ZR.mount: Deactivated successfully.
Jan 29 16:12:11.246382 sshd[5438]: Connection closed by 10.200.16.10 port 40208
Jan 29 16:12:11.246988 sshd-session[5375]: pam_unix(sshd:session): session closed for user core
Jan 29 16:12:11.250506 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:40208.service: Deactivated successfully.
Jan 29 16:12:11.253675 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:12:11.254673 systemd-logind[1731]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:12:11.256339 systemd-logind[1731]: Removed session 27.