Jan 14 13:35:02.324577 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 14 13:35:02.324598 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 14 13:35:02.324606 kernel: KASLR enabled
Jan 14 13:35:02.324612 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 14 13:35:02.324619 kernel: printk: bootconsole [pl11] enabled
Jan 14 13:35:02.324624 kernel: efi: EFI v2.7 by EDK II
Jan 14 13:35:02.324631 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 14 13:35:02.324637 kernel: random: crng init done
Jan 14 13:35:02.324643 kernel: secureboot: Secure boot disabled
Jan 14 13:35:02.324649 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:35:02.324655 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 14 13:35:02.324661 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324666 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324674 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 14 13:35:02.324681 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324687 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324693 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324701 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324707 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324713 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324719 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 14 13:35:02.324725 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:35:02.324731 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 14 13:35:02.324737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 14 13:35:02.324743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 14 13:35:02.324750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 14 13:35:02.324756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 14 13:35:02.324762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 14 13:35:02.324770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 14 13:35:02.324776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 14 13:35:02.324782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 14 13:35:02.324788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 14 13:35:02.324794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 14 13:35:02.324800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 14 13:35:02.324806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 14 13:35:02.324812 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 14 13:35:02.324818 kernel: Zone ranges:
Jan 14 13:35:02.324824 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 14 13:35:02.324829 kernel: DMA32 empty
Jan 14 13:35:02.326863 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 14 13:35:02.326894 kernel: Movable zone start for each node
Jan 14 13:35:02.326902 kernel: Early memory node ranges
Jan 14 13:35:02.326909 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 14 13:35:02.326915 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 14 13:35:02.326922 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 14 13:35:02.326930 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 14 13:35:02.326937 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 14 13:35:02.326943 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 14 13:35:02.326950 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 14 13:35:02.326956 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 14 13:35:02.326963 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 14 13:35:02.326970 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 14 13:35:02.326977 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 14 13:35:02.326983 kernel: psci: probing for conduit method from ACPI.
Jan 14 13:35:02.326990 kernel: psci: PSCIv1.1 detected in firmware.
Jan 14 13:35:02.326996 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 14 13:35:02.327003 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 14 13:35:02.327011 kernel: psci: SMC Calling Convention v1.4
Jan 14 13:35:02.327018 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 14 13:35:02.327024 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 14 13:35:02.327031 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 14 13:35:02.327037 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 14 13:35:02.327044 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 14 13:35:02.327051 kernel: Detected PIPT I-cache on CPU0
Jan 14 13:35:02.327057 kernel: CPU features: detected: GIC system register CPU interface
Jan 14 13:35:02.327064 kernel: CPU features: detected: Hardware dirty bit management
Jan 14 13:35:02.327070 kernel: CPU features: detected: Spectre-BHB
Jan 14 13:35:02.327077 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 14 13:35:02.327085 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 14 13:35:02.327091 kernel: CPU features: detected: ARM erratum 1418040
Jan 14 13:35:02.327098 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 14 13:35:02.327104 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 14 13:35:02.327111 kernel: alternatives: applying boot alternatives
Jan 14 13:35:02.327118 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 14 13:35:02.327125 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:35:02.327132 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:35:02.327139 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 13:35:02.327145 kernel: Fallback order for Node 0: 0
Jan 14 13:35:02.327151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 14 13:35:02.327159 kernel: Policy zone: Normal
Jan 14 13:35:02.327166 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:35:02.327172 kernel: software IO TLB: area num 2.
Jan 14 13:35:02.327179 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 14 13:35:02.327186 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved)
Jan 14 13:35:02.327193 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:35:02.327199 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:35:02.327206 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:35:02.327213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:35:02.327220 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:35:02.327227 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:35:02.327235 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:35:02.327241 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:35:02.327248 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 14 13:35:02.327255 kernel: GICv3: 960 SPIs implemented
Jan 14 13:35:02.327261 kernel: GICv3: 0 Extended SPIs implemented
Jan 14 13:35:02.327268 kernel: Root IRQ handler: gic_handle_irq
Jan 14 13:35:02.327274 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 14 13:35:02.327281 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 14 13:35:02.327287 kernel: ITS: No ITS available, not enabling LPIs
Jan 14 13:35:02.327294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:35:02.327301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 13:35:02.327307 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 14 13:35:02.327316 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 14 13:35:02.327323 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 14 13:35:02.327329 kernel: Console: colour dummy device 80x25
Jan 14 13:35:02.327336 kernel: printk: console [tty1] enabled
Jan 14 13:35:02.327343 kernel: ACPI: Core revision 20230628
Jan 14 13:35:02.327350 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 14 13:35:02.327357 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:35:02.327364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:35:02.327371 kernel: landlock: Up and running.
Jan 14 13:35:02.327379 kernel: SELinux: Initializing.
Jan 14 13:35:02.327386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:35:02.327393 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:35:02.327399 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:35:02.327406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:35:02.327413 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 14 13:35:02.327420 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 14 13:35:02.327433 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:35:02.327440 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:35:02.327448 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:35:02.327455 kernel: Remapping and enabling EFI services.
Jan 14 13:35:02.327462 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:35:02.327470 kernel: Detected PIPT I-cache on CPU1
Jan 14 13:35:02.327477 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 14 13:35:02.327485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 13:35:02.327492 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 14 13:35:02.327499 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:35:02.327508 kernel: SMP: Total of 2 processors activated.
Jan 14 13:35:02.327515 kernel: CPU features: detected: 32-bit EL0 Support
Jan 14 13:35:02.327522 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 14 13:35:02.327530 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 14 13:35:02.327537 kernel: CPU features: detected: CRC32 instructions
Jan 14 13:35:02.327544 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 14 13:35:02.327551 kernel: CPU features: detected: LSE atomic instructions
Jan 14 13:35:02.327559 kernel: CPU features: detected: Privileged Access Never
Jan 14 13:35:02.327566 kernel: CPU: All CPU(s) started at EL1
Jan 14 13:35:02.327574 kernel: alternatives: applying system-wide alternatives
Jan 14 13:35:02.327581 kernel: devtmpfs: initialized
Jan 14 13:35:02.327588 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:35:02.327595 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:35:02.327602 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:35:02.327609 kernel: SMBIOS 3.1.0 present.
Jan 14 13:35:02.327617 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 14 13:35:02.327624 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:35:02.327631 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 14 13:35:02.327640 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 14 13:35:02.327647 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 14 13:35:02.327654 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:35:02.327662 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 14 13:35:02.327669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:35:02.327676 kernel: cpuidle: using governor menu
Jan 14 13:35:02.327683 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 14 13:35:02.327690 kernel: ASID allocator initialised with 32768 entries
Jan 14 13:35:02.327697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:35:02.327706 kernel: Serial: AMBA PL011 UART driver
Jan 14 13:35:02.327713 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 14 13:35:02.327721 kernel: Modules: 0 pages in range for non-PLT usage
Jan 14 13:35:02.327728 kernel: Modules: 508880 pages in range for PLT usage
Jan 14 13:35:02.327735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:35:02.327742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:35:02.327749 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 14 13:35:02.327756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 14 13:35:02.327763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:35:02.327772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:35:02.327779 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 14 13:35:02.327786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 14 13:35:02.327793 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:35:02.327800 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:35:02.327807 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:35:02.327814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:35:02.327821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:35:02.327829 kernel: ACPI: Interpreter enabled
Jan 14 13:35:02.327848 kernel: ACPI: Using GIC for interrupt routing
Jan 14 13:35:02.327856 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 14 13:35:02.327863 kernel: printk: console [ttyAMA0] enabled
Jan 14 13:35:02.327870 kernel: printk: bootconsole [pl11] disabled
Jan 14 13:35:02.327877 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 14 13:35:02.327884 kernel: iommu: Default domain type: Translated
Jan 14 13:35:02.327891 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 14 13:35:02.327898 kernel: efivars: Registered efivars operations
Jan 14 13:35:02.327905 kernel: vgaarb: loaded
Jan 14 13:35:02.327915 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 14 13:35:02.327922 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:35:02.327929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:35:02.327936 kernel: pnp: PnP ACPI init
Jan 14 13:35:02.327943 kernel: pnp: PnP ACPI: found 0 devices
Jan 14 13:35:02.327950 kernel: NET: Registered PF_INET protocol family
Jan 14 13:35:02.327957 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:35:02.327964 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 13:35:02.327971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:35:02.327980 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 13:35:02.327987 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 13:35:02.327994 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 13:35:02.328002 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:35:02.328009 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:35:02.328016 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:35:02.328023 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:35:02.328030 kernel: kvm [1]: HYP mode not available
Jan 14 13:35:02.328038 kernel: Initialise system trusted keyrings
Jan 14 13:35:02.328046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 13:35:02.328054 kernel: Key type asymmetric registered
Jan 14 13:35:02.328060 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:35:02.328068 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 13:35:02.328075 kernel: io scheduler mq-deadline registered
Jan 14 13:35:02.328082 kernel: io scheduler kyber registered
Jan 14 13:35:02.328089 kernel: io scheduler bfq registered
Jan 14 13:35:02.328096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:35:02.328103 kernel: thunder_xcv, ver 1.0
Jan 14 13:35:02.328111 kernel: thunder_bgx, ver 1.0
Jan 14 13:35:02.328118 kernel: nicpf, ver 1.0
Jan 14 13:35:02.328125 kernel: nicvf, ver 1.0
Jan 14 13:35:02.328281 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 14 13:35:02.328351 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-14T13:35:01 UTC (1736861701)
Jan 14 13:35:02.328361 kernel: efifb: probing for efifb
Jan 14 13:35:02.328368 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:35:02.328375 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:35:02.328385 kernel: efifb: scrolling: redraw
Jan 14 13:35:02.328392 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:35:02.328399 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:35:02.328406 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:35:02.328413 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 14 13:35:02.328420 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:35:02.328427 kernel: No ACPI PMU IRQ for CPU0
Jan 14 13:35:02.328434 kernel: No ACPI PMU IRQ for CPU1
Jan 14 13:35:02.328442 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 14 13:35:02.328450 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 14 13:35:02.328458 kernel: watchdog: Hard watchdog permanently disabled
Jan 14 13:35:02.328465 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:35:02.328472 kernel: Segment Routing with IPv6
Jan 14 13:35:02.328479 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:35:02.328487 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:35:02.328494 kernel: Key type dns_resolver registered
Jan 14 13:35:02.328501 kernel: registered taskstats version 1
Jan 14 13:35:02.328507 kernel: Loading compiled-in X.509 certificates
Jan 14 13:35:02.328516 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 14 13:35:02.328523 kernel: Key type .fscrypt registered
Jan 14 13:35:02.328530 kernel: Key type fscrypt-provisioning registered
Jan 14 13:35:02.328537 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:35:02.328544 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:35:02.328551 kernel: ima: No architecture policies found
Jan 14 13:35:02.328558 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 14 13:35:02.328565 kernel: clk: Disabling unused clocks
Jan 14 13:35:02.328572 kernel: Freeing unused kernel memory: 39936K
Jan 14 13:35:02.328580 kernel: Run /init as init process
Jan 14 13:35:02.328587 kernel: with arguments:
Jan 14 13:35:02.328594 kernel: /init
Jan 14 13:35:02.328601 kernel: with environment:
Jan 14 13:35:02.328608 kernel: HOME=/
Jan 14 13:35:02.328615 kernel: TERM=linux
Jan 14 13:35:02.328622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:35:02.328630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:35:02.328641 systemd[1]: Detected virtualization microsoft.
Jan 14 13:35:02.328649 systemd[1]: Detected architecture arm64.
Jan 14 13:35:02.328656 systemd[1]: Running in initrd.
Jan 14 13:35:02.328664 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:35:02.328671 systemd[1]: Hostname set to .
Jan 14 13:35:02.328680 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:35:02.328687 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:35:02.328696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:35:02.328705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:35:02.328714 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:35:02.328722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:35:02.328729 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:35:02.328738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:35:02.328747 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:35:02.328757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:35:02.328765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:35:02.328773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:35:02.328780 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:35:02.328788 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:35:02.328796 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:35:02.328804 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:35:02.328811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:35:02.328819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:35:02.328829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:35:02.330909 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:35:02.330930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:35:02.330939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:35:02.330947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:35:02.330955 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:35:02.330964 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:35:02.330972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:35:02.330985 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:35:02.330993 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:35:02.331001 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:35:02.331040 systemd-journald[218]: Collecting audit messages is disabled.
Jan 14 13:35:02.331063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:35:02.331071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:35:02.331080 systemd-journald[218]: Journal started
Jan 14 13:35:02.331102 systemd-journald[218]: Runtime Journal (/run/log/journal/0b6c9f7081fc49ca8dbfe45cf667486b) is 8.0M, max 78.5M, 70.5M free.
Jan 14 13:35:02.337867 systemd-modules-load[219]: Inserted module 'overlay'
Jan 14 13:35:02.372614 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:35:02.372668 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:35:02.378612 kernel: Bridge firewalling registered
Jan 14 13:35:02.382049 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 14 13:35:02.387092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:35:02.393571 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:35:02.406965 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:35:02.418649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:35:02.429293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:35:02.450103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:35:02.458997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:35:02.487009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:35:02.500018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:35:02.515865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:35:02.538523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:35:02.544727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:35:02.571332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:35:02.580016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:35:02.593985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:35:02.606802 dracut-cmdline[251]: dracut-dracut-053
Jan 14 13:35:02.622034 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 14 13:35:02.619729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:35:02.674031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:35:02.708570 systemd-resolved[272]: Positive Trust Anchors:
Jan 14 13:35:02.708594 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:35:02.708625 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:35:02.710710 systemd-resolved[272]: Defaulting to hostname 'linux'.
Jan 14 13:35:02.711596 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:35:02.727978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:35:02.823890 kernel: SCSI subsystem initialized
Jan 14 13:35:02.831859 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:35:02.842867 kernel: iscsi: registered transport (tcp)
Jan 14 13:35:02.859929 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:35:02.859996 kernel: QLogic iSCSI HBA Driver
Jan 14 13:35:02.892092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:35:02.906139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:35:02.938722 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:35:02.938768 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:35:02.945314 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:35:03.004868 kernel: raid6: neonx8 gen() 15774 MB/s
Jan 14 13:35:03.013859 kernel: raid6: neonx4 gen() 15826 MB/s
Jan 14 13:35:03.033849 kernel: raid6: neonx2 gen() 13201 MB/s
Jan 14 13:35:03.054854 kernel: raid6: neonx1 gen() 10535 MB/s
Jan 14 13:35:03.074848 kernel: raid6: int64x8 gen() 6795 MB/s
Jan 14 13:35:03.094850 kernel: raid6: int64x4 gen() 7354 MB/s
Jan 14 13:35:03.115849 kernel: raid6: int64x2 gen() 6111 MB/s
Jan 14 13:35:03.139644 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 14 13:35:03.139657 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s
Jan 14 13:35:03.164396 kernel: raid6: .... xor() 12432 MB/s, rmw enabled
Jan 14 13:35:03.164417 kernel: raid6: using neon recovery algorithm
Jan 14 13:35:03.176807 kernel: xor: measuring software checksum speed
Jan 14 13:35:03.176821 kernel: 8regs : 21641 MB/sec
Jan 14 13:35:03.180281 kernel: 32regs : 21664 MB/sec
Jan 14 13:35:03.183890 kernel: arm64_neon : 27936 MB/sec
Jan 14 13:35:03.187998 kernel: xor: using function: arm64_neon (27936 MB/sec)
Jan 14 13:35:03.237862 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:35:03.248196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:35:03.264000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:35:03.286598 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 14 13:35:03.292266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:35:03.315131 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:35:03.331655 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Jan 14 13:35:03.358634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:35:03.373982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:35:03.413116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:35:03.436043 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:35:03.461710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:35:03.473531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:35:03.498939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:35:03.515685 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:35:03.537459 kernel: hv_vmbus: Vmbus version:5.3
Jan 14 13:35:03.541067 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:35:03.577713 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:35:03.577735 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:35:03.577744 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 13:35:03.577753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 14 13:35:03.583983 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:35:03.628750 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:35:03.628770 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:35:03.628902 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 14 13:35:03.628914 kernel: PTP clock support registered
Jan 14 13:35:03.628923 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:35:03.621576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:35:03.637907 kernel: scsi host0: storvsc_host_t
Jan 14 13:35:03.621752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:35:03.666893 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:35:03.666964 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:35:03.666978 kernel: scsi host1: storvsc_host_t
Jan 14 13:35:03.663639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:35:03.673938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:35:03.706603 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:35:03.674241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:35:03.687794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:35:03.720305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:35:03.749125 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:35:03.749146 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:35:03.757879 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:35:03.757927 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:35:03.757938 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:35:03.560331 systemd-resolved[272]: Clock change detected. Flushing caches.
Jan 14 13:35:03.603641 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: VF slot 1 added
Jan 14 13:35:03.603771 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:35:03.603869 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:35:03.603880 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:35:03.603966 systemd-journald[218]: Time jumped backwards, rotating.
Jan 14 13:35:03.585085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:35:03.616551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:35:03.631532 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:35:03.645191 kernel: hv_pci fb0c8227-4fdc-4212-a89d-8a3b690ea318: PCI VMBus probing: Using version 0x10004
Jan 14 13:35:03.750201 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:35:03.750339 kernel: hv_pci fb0c8227-4fdc-4212-a89d-8a3b690ea318: PCI host bridge to bus 4fdc:00
Jan 14 13:35:03.750445 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:35:03.750546 kernel: pci_bus 4fdc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 14 13:35:03.750650 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:35:03.750736 kernel: pci_bus 4fdc:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:35:03.750809 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:35:03.750888 kernel: pci 4fdc:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 14 13:35:03.750993 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:35:03.751075 kernel: pci 4fdc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 14 13:35:03.751154 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:35:03.751163 kernel: pci 4fdc:00:02.0: enabling Extended Tags
Jan 14 13:35:03.751241 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:35:03.751321 kernel: pci 4fdc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4fdc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 14 13:35:03.751557 kernel: pci_bus 4fdc:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:35:03.751638 kernel: pci 4fdc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 14 13:35:03.708614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:35:03.795437 kernel: mlx5_core 4fdc:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:35:04.010913 kernel: mlx5_core 4fdc:00:02.0: firmware version: 16.30.1284
Jan 14 13:35:04.011055 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: VF registering: eth1
Jan 14 13:35:04.011148 kernel: mlx5_core 4fdc:00:02.0 eth1: joined to eth0
Jan 14 13:35:04.011237 kernel: mlx5_core 4fdc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 14 13:35:04.018388 kernel: mlx5_core 4fdc:00:02.0 enP20444s1: renamed from eth1
Jan 14 13:35:04.269251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:35:04.391130 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:35:04.412370 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (507)
Jan 14 13:35:04.424407 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (497)
Jan 14 13:35:04.434727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:35:04.446566 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:35:04.455572 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:35:04.491500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:35:04.514392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:35:04.521362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:35:05.531423 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:35:05.531847 disk-uuid[607]: The operation has completed successfully.
Jan 14 13:35:05.590380 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:35:05.590476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:35:05.617487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:35:05.631169 sh[693]: Success
Jan 14 13:35:05.661420 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 14 13:35:05.866591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:35:05.885028 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:35:05.894889 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:35:05.925228 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 14 13:35:05.925267 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 14 13:35:05.925277 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:35:05.937382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:35:05.941536 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:35:06.227780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:35:06.233681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:35:06.254620 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:35:06.273642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:35:06.299297 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 14 13:35:06.299318 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 14 13:35:06.299327 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:35:06.323975 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:35:06.330667 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:35:06.344147 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 14 13:35:06.351009 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:35:06.368109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:35:06.421163 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:35:06.439498 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:35:06.467310 systemd-networkd[877]: lo: Link UP
Jan 14 13:35:06.471102 systemd-networkd[877]: lo: Gained carrier
Jan 14 13:35:06.472811 systemd-networkd[877]: Enumeration completed
Jan 14 13:35:06.476402 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:35:06.482696 systemd[1]: Reached target network.target - Network.
Jan 14 13:35:06.486626 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:35:06.486629 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:35:06.569370 kernel: mlx5_core 4fdc:00:02.0 enP20444s1: Link up
Jan 14 13:35:06.608409 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: Data path switched to VF: enP20444s1
Jan 14 13:35:06.608115 systemd-networkd[877]: enP20444s1: Link UP
Jan 14 13:35:06.608199 systemd-networkd[877]: eth0: Link UP
Jan 14 13:35:06.608301 systemd-networkd[877]: eth0: Gained carrier
Jan 14 13:35:06.608311 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:35:06.617598 systemd-networkd[877]: enP20444s1: Gained carrier
Jan 14 13:35:06.648403 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 14 13:35:07.380667 ignition[808]: Ignition 2.20.0
Jan 14 13:35:07.380679 ignition[808]: Stage: fetch-offline
Jan 14 13:35:07.385256 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:35:07.380715 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:07.380723 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:07.380806 ignition[808]: parsed url from cmdline: ""
Jan 14 13:35:07.380809 ignition[808]: no config URL provided
Jan 14 13:35:07.380813 ignition[808]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:35:07.413620 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:35:07.380819 ignition[808]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:35:07.380823 ignition[808]: failed to fetch config: resource requires networking
Jan 14 13:35:07.380993 ignition[808]: Ignition finished successfully
Jan 14 13:35:07.444713 ignition[886]: Ignition 2.20.0
Jan 14 13:35:07.444722 ignition[886]: Stage: fetch
Jan 14 13:35:07.444970 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:07.444980 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:07.445449 ignition[886]: parsed url from cmdline: ""
Jan 14 13:35:07.445456 ignition[886]: no config URL provided
Jan 14 13:35:07.445462 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:35:07.445471 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:35:07.445505 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:35:07.564043 ignition[886]: GET result: OK
Jan 14 13:35:07.564125 ignition[886]: config has been read from IMDS userdata
Jan 14 13:35:07.564172 ignition[886]: parsing config with SHA512: 97aeab3800aace747c7371d0042c7940e71f3d13bb1c1138345cd2de1e041bb53a6b1a091b484d4436943687bc70c2bbc889fda5d020aedfd24f3575dd9fac31
Jan 14 13:35:07.568876 unknown[886]: fetched base config from "system"
Jan 14 13:35:07.569347 ignition[886]: fetch: fetch complete
Jan 14 13:35:07.568883 unknown[886]: fetched base config from "system"
Jan 14 13:35:07.569368 ignition[886]: fetch: fetch passed
Jan 14 13:35:07.568888 unknown[886]: fetched user config from "azure"
Jan 14 13:35:07.569410 ignition[886]: Ignition finished successfully
Jan 14 13:35:07.574088 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:35:07.606493 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:35:07.623597 ignition[892]: Ignition 2.20.0
Jan 14 13:35:07.623611 ignition[892]: Stage: kargs
Jan 14 13:35:07.627595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:35:07.623775 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:07.623784 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:07.624706 ignition[892]: kargs: kargs passed
Jan 14 13:35:07.624749 ignition[892]: Ignition finished successfully
Jan 14 13:35:07.654603 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:35:07.676010 ignition[898]: Ignition 2.20.0
Jan 14 13:35:07.676020 ignition[898]: Stage: disks
Jan 14 13:35:07.681397 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:35:07.676172 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:07.690097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:35:07.676180 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:07.696421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:35:07.677056 ignition[898]: disks: disks passed
Jan 14 13:35:07.707970 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:35:07.677097 ignition[898]: Ignition finished successfully
Jan 14 13:35:07.717992 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:35:07.729887 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:35:07.756562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:35:07.972145 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:35:07.981730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:35:08.002485 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:35:08.057374 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 14 13:35:08.058452 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:35:08.063276 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:35:08.079493 systemd-networkd[877]: eth0: Gained IPv6LL
Jan 14 13:35:08.118483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:35:08.131015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:35:08.147374 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917)
Jan 14 13:35:08.161828 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 14 13:35:08.161866 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 14 13:35:08.165810 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:35:08.166592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:35:08.173084 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:35:08.210259 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:35:08.173115 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:35:08.180684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:35:08.199540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:35:08.208317 systemd-networkd[877]: enP20444s1: Gained IPv6LL
Jan 14 13:35:08.223267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:35:08.738161 coreos-metadata[919]: Jan 14 13:35:08.737 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:35:08.748392 coreos-metadata[919]: Jan 14 13:35:08.748 INFO Fetch successful
Jan 14 13:35:08.753754 coreos-metadata[919]: Jan 14 13:35:08.748 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:35:08.765519 coreos-metadata[919]: Jan 14 13:35:08.765 INFO Fetch successful
Jan 14 13:35:08.780389 coreos-metadata[919]: Jan 14 13:35:08.780 INFO wrote hostname ci-4186.1.0-a-8a230934f7 to /sysroot/etc/hostname
Jan 14 13:35:08.791017 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:35:08.987036 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:35:09.062338 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:35:09.070841 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:35:09.079275 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:35:10.032806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:35:10.048523 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:35:10.055831 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:35:10.082310 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 14 13:35:10.076541 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:35:10.102380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:35:10.119660 ignition[1042]: INFO : Ignition 2.20.0
Jan 14 13:35:10.119660 ignition[1042]: INFO : Stage: mount
Jan 14 13:35:10.119660 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:10.119660 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:10.119660 ignition[1042]: INFO : mount: mount passed
Jan 14 13:35:10.119660 ignition[1042]: INFO : Ignition finished successfully
Jan 14 13:35:10.119983 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:35:10.147538 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:35:10.170590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:35:10.201387 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052)
Jan 14 13:35:10.201427 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 14 13:35:10.207501 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 14 13:35:10.211760 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:35:10.218382 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:35:10.219385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:35:10.241542 ignition[1070]: INFO : Ignition 2.20.0
Jan 14 13:35:10.241542 ignition[1070]: INFO : Stage: files
Jan 14 13:35:10.249668 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:35:10.249668 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:35:10.249668 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:35:10.268046 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:35:10.268046 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:35:10.345020 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:35:10.353007 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:35:10.353007 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:35:10.345424 unknown[1070]: wrote ssh authorized keys file for user: core
Jan 14 13:35:10.373135 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 14 13:35:10.373135 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 14 13:35:10.558390 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:35:10.679222 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 14 13:35:10.679222 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:35:10.700075 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 14 13:35:11.006500 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 13:35:11.078900 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:35:11.078900 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 14 13:35:11.560693 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 13:35:12.326430 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 14 13:35:12.326430 ignition[1070]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 13:35:12.384986 ignition[1070]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:35:12.400447 ignition[1070]: INFO : files: files passed
Jan 14 13:35:12.400447 ignition[1070]: INFO : Ignition finished successfully
Jan 14 13:35:12.404791 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:35:12.444033 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:35:12.458533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:35:12.542458 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:35:12.542458 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:35:12.486975 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:35:12.579226 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:12.487063 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:35:12.497065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:12.513525 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:35:12.542616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:35:12.591010 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:35:12.591113 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:35:12.604960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:35:12.618634 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:35:12.631586 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:35:12.651572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:35:12.687622 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:12.710602 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:35:12.733031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:35:12.733132 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:35:12.746676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:12.759995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:12.772914 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:35:12.784616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:35:12.784707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:12.802471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:35:12.815688 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:35:12.827791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:35:12.839015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:35:12.852580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:35:12.865121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:35:12.877731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:35:12.891121 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:35:12.904486 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:35:12.915841 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:35:12.926396 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:35:12.926470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:35:12.942469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:12.954863 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:12.967757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
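The teardown above is a long run of Stopped/Finished/Deactivated messages. A small sketch that reduces lines shaped like the ones in this log to (timestamp, source, message) tuples for auditing; the regex encodes an assumption about this log's formatting and is not a journald interface.

    import re

    # Assumed line shape: "Jan 14 13:35:12.746676 systemd[1]: Stopped target ..."
    LINE = re.compile(
        r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<src>[\w.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)"
    )

    def events(lines):
        """Yield (timestamp, source, message) for systemd lifecycle messages."""
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue  # kernel lines and other shapes are skipped
            msg = m.group("msg")
            if msg.startswith(("Stopped", "Finished", "Starting", "Reached target")):
                yield m.group("ts"), m.group("src"), msg

    sample = [
        "Jan 14 13:35:12.746676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.",
        "Jan 14 13:35:12.784616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.",
    ]
    for ts, src, msg in events(sample):
        print(ts, src, msg)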
Jan 14 13:35:12.974072 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:12.981699 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:35:12.981769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:35:13.001517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:35:13.001578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:13.014597 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:35:13.014642 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:35:13.025922 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:35:13.025978 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:35:13.092186 ignition[1122]: INFO : Ignition 2.20.0 Jan 14 13:35:13.092186 ignition[1122]: INFO : Stage: umount Jan 14 13:35:13.092186 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:13.092186 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:13.053555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:35:13.149828 ignition[1122]: INFO : umount: umount passed Jan 14 13:35:13.149828 ignition[1122]: INFO : Ignition finished successfully Jan 14 13:35:13.071797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:35:13.085066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:35:13.085128 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:13.097210 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:35:13.097265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:35:13.115107 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:35:13.115196 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:35:13.131362 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:35:13.131424 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:35:13.143710 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:35:13.143774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:35:13.150009 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:35:13.150072 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:35:13.161202 systemd[1]: Stopped target network.target - Network. Jan 14 13:35:13.172550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:35:13.172613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:35:13.184450 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:35:13.196304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:35:13.202796 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:13.210604 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:35:13.222333 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:35:13.235197 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:35:13.235238 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 14 13:35:13.246560 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:35:13.246598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:35:13.258240 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:35:13.258295 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:35:13.269265 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:35:13.269322 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:35:13.281715 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:35:13.299321 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:35:13.310400 systemd-networkd[877]: eth0: DHCPv6 lease lost Jan 14 13:35:13.312093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:35:13.316659 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:35:13.316794 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:35:13.330082 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:35:13.330171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:35:13.556037 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: Data path switched from VF: enP20444s1 Jan 14 13:35:13.344230 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:35:13.344290 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:13.384557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:35:13.395156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:35:13.395229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:35:13.407228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:35:13.407279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:13.419031 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:35:13.419077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:35:13.431579 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:35:13.431622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:13.445029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:13.490195 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:35:13.490385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:13.501625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:35:13.501668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:13.517901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:35:13.517934 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:13.539042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:35:13.539099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:35:13.556114 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:35:13.556162 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:35:13.567701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
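The hv_netvsc message above names the NIC by its VMBus instance GUID, 000d3afb-f71d-000d-3afb-f71d000d3afb, which for this adapter is visibly the 6-byte MAC address repeated. A tiny sketch that recovers the MAC under that assumption; this is an observation about the value in the log, not a documented guarantee.

    def mac_from_vmbus_guid(guid: str) -> str:
        """Recover a MAC address from an Azure NetVSC VMBus instance GUID,
        assuming the GUID is the 6-byte MAC repeated, as it appears to be
        for the adapter logged above."""
        digits = guid.replace("-", "")[:12]          # first 6 bytes = 12 hex digits
        return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

    print(mac_from_vmbus_guid("000d3afb-f71d-000d-3afb-f71d000d3afb"))  # 00:0d:3a:fb:f7:1d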
Jan 14 13:35:13.567749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:13.603868 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:35:13.610369 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:35:13.610436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:13.623599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:35:13.623651 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:13.639692 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:35:13.846792 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jan 14 13:35:13.639742 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:13.653006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:13.653053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:13.671891 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:35:13.671993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:35:13.684648 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:35:13.684727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:35:13.702899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:35:13.702987 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:35:13.723393 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:35:13.735836 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:35:13.735936 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:35:13.770620 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:35:13.789615 systemd[1]: Switching root. 
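Between the first kernel timestamp in this journal (13:35:02.324577) and "Switching root." (13:35:13.789615) the initrd ran for roughly 11.5 seconds. A quick check of that arithmetic, assuming both stamps share the same day and clock (the journal itself records a clock change later, so this is only approximate):

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"
    boot = datetime.strptime("Jan 14 13:35:02.324577", FMT)
    switch_root = datetime.strptime("Jan 14 13:35:13.789615", FMT)

    elapsed = switch_root - boot
    print(f"initrd ran for ~{elapsed.total_seconds():.1f} s")  # ~11.5 s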
Jan 14 13:35:13.929886 systemd-journald[218]: Journal stopped
Jan 14 13:35:02.327132 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:35:02.327139 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 13:35:02.327145 kernel: Fallback order for Node 0: 0 Jan 14 13:35:02.327151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 14 13:35:02.327159 kernel: Policy zone: Normal Jan 14 13:35:02.327166 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:35:02.327172 kernel: software IO TLB: area num 2. Jan 14 13:35:02.327179 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jan 14 13:35:02.327186 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved) Jan 14 13:35:02.327193 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:35:02.327199 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:35:02.327206 kernel: rcu: RCU event tracing is enabled. Jan 14 13:35:02.327213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:35:02.327220 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:35:02.327227 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:35:02.327235 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:35:02.327241 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:35:02.327248 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 13:35:02.327255 kernel: GICv3: 960 SPIs implemented Jan 14 13:35:02.327261 kernel: GICv3: 0 Extended SPIs implemented Jan 14 13:35:02.327268 kernel: Root IRQ handler: gic_handle_irq Jan 14 13:35:02.327274 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 14 13:35:02.327281 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 14 13:35:02.327287 kernel: ITS: No ITS available, not enabling LPIs Jan 14 13:35:02.327294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 13:35:02.327301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 13:35:02.327307 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 14 13:35:02.327316 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 14 13:35:02.327323 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 14 13:35:02.327329 kernel: Console: colour dummy device 80x25 Jan 14 13:35:02.327336 kernel: printk: console [tty1] enabled Jan 14 13:35:02.327343 kernel: ACPI: Core revision 20230628 Jan 14 13:35:02.327350 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 14 13:35:02.327357 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:35:02.327364 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:35:02.327371 kernel: landlock: Up and running. Jan 14 13:35:02.327379 kernel: SELinux: Initializing. Jan 14 13:35:02.327386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 13:35:02.327393 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 13:35:02.327399 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
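The Memory line above reports 3982056K of 4194160K available, with 212104K reserved; the reserved figure is simply the difference between the two. A short check:

    total_kib = 4_194_160       # "Memory: .../4194160K" from the line above
    available_kib = 3_982_056   # "Memory: 3982056K/..." from the line above

    reserved_kib = total_kib - available_kib
    print(reserved_kib)                         # 212104, matching "212104K reserved"
    print(f"{reserved_kib / total_kib:.1%}")    # ~5.1% of RAM held back at boot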
Jan 14 13:35:02.327406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:35:02.327413 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 14 13:35:02.327420 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 14 13:35:02.327433 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:35:02.327440 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:35:02.327448 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:35:02.327455 kernel: Remapping and enabling EFI services. Jan 14 13:35:02.327462 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:35:02.327470 kernel: Detected PIPT I-cache on CPU1 Jan 14 13:35:02.327477 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 14 13:35:02.327485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 13:35:02.327492 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 14 13:35:02.327499 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:35:02.327508 kernel: SMP: Total of 2 processors activated. Jan 14 13:35:02.327515 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 13:35:02.327522 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 14 13:35:02.327530 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 13:35:02.327537 kernel: CPU features: detected: CRC32 instructions Jan 14 13:35:02.327544 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 13:35:02.327551 kernel: CPU features: detected: LSE atomic instructions Jan 14 13:35:02.327559 kernel: CPU features: detected: Privileged Access Never Jan 14 13:35:02.327566 kernel: CPU: All CPU(s) started at EL1 Jan 14 13:35:02.327574 kernel: alternatives: applying system-wide alternatives Jan 14 13:35:02.327581 kernel: devtmpfs: initialized Jan 14 13:35:02.327588 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:35:02.327595 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:35:02.327602 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:35:02.327609 kernel: SMBIOS 3.1.0 present. Jan 14 13:35:02.327617 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 14 13:35:02.327624 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:35:02.327631 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 13:35:02.327640 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 13:35:02.327647 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 13:35:02.327654 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:35:02.327662 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 14 13:35:02.327669 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:35:02.327676 kernel: cpuidle: using governor menu Jan 14 13:35:02.327683 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 13:35:02.327690 kernel: ASID allocator initialised with 32768 entries Jan 14 13:35:02.327697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:35:02.327706 kernel: Serial: AMBA PL011 UART driver Jan 14 13:35:02.327713 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 13:35:02.327721 kernel: Modules: 0 pages in range for non-PLT usage Jan 14 13:35:02.327728 kernel: Modules: 508880 pages in range for PLT usage Jan 14 13:35:02.327735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:35:02.327742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:35:02.327749 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 13:35:02.327756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 13:35:02.327763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:35:02.327772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:35:02.327779 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 13:35:02.327786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 13:35:02.327793 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:35:02.327800 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:35:02.327807 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:35:02.327814 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:35:02.327821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:35:02.327829 kernel: ACPI: Interpreter enabled Jan 14 13:35:02.327848 kernel: ACPI: Using GIC for interrupt routing Jan 14 13:35:02.327856 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 14 13:35:02.327863 kernel: printk: console [ttyAMA0] enabled Jan 14 13:35:02.327870 kernel: printk: bootconsole [pl11] disabled Jan 14 13:35:02.327877 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 14 13:35:02.327884 kernel: iommu: Default domain type: Translated Jan 14 13:35:02.327891 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 13:35:02.327898 kernel: efivars: Registered efivars operations Jan 14 13:35:02.327905 kernel: vgaarb: loaded Jan 14 13:35:02.327915 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 13:35:02.327922 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:35:02.327929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:35:02.327936 kernel: pnp: PnP ACPI init Jan 14 13:35:02.327943 kernel: pnp: PnP ACPI: found 0 devices Jan 14 13:35:02.327950 kernel: NET: Registered PF_INET protocol family Jan 14 13:35:02.327957 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:35:02.327964 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 13:35:02.327971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:35:02.327980 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 13:35:02.327987 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 13:35:02.327994 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 13:35:02.328002 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 13:35:02.328009 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 13:35:02.328016 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:35:02.328023 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:35:02.328030 kernel: kvm [1]: HYP mode not available Jan 14 13:35:02.328038 kernel: Initialise system trusted keyrings Jan 14 13:35:02.328046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 13:35:02.328054 kernel: Key type asymmetric registered Jan 14 13:35:02.328060 kernel: Asymmetric key parser 'x509' registered Jan 14 13:35:02.328068 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 13:35:02.328075 kernel: io scheduler mq-deadline registered Jan 14 13:35:02.328082 kernel: io scheduler kyber registered Jan 14 13:35:02.328089 kernel: io scheduler bfq registered Jan 14 13:35:02.328096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:35:02.328103 kernel: thunder_xcv, ver 1.0 Jan 14 13:35:02.328111 kernel: thunder_bgx, ver 1.0 Jan 14 13:35:02.328118 kernel: nicpf, ver 1.0 Jan 14 13:35:02.328125 kernel: nicvf, ver 1.0 Jan 14 13:35:02.328281 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 13:35:02.328351 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-14T13:35:01 UTC (1736861701) Jan 14 13:35:02.328361 kernel: efifb: probing for efifb Jan 14 13:35:02.328368 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:35:02.328375 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:35:02.328385 kernel: efifb: scrolling: redraw Jan 14 13:35:02.328392 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:35:02.328399 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:35:02.328406 kernel: fb0: EFI VGA frame buffer device Jan 14 13:35:02.328413 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 14 13:35:02.328420 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:35:02.328427 kernel: No ACPI PMU IRQ for CPU0 Jan 14 13:35:02.328434 kernel: No ACPI PMU IRQ for CPU1 Jan 14 13:35:02.328442 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 14 13:35:02.328450 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 14 13:35:02.328458 kernel: watchdog: Hard watchdog permanently disabled Jan 14 13:35:02.328465 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:35:02.328472 kernel: Segment Routing with IPv6 Jan 14 13:35:02.328479 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:35:02.328487 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:35:02.328494 kernel: Key type dns_resolver registered Jan 14 13:35:02.328501 kernel: registered taskstats version 1 Jan 14 13:35:02.328507 kernel: Loading compiled-in X.509 certificates Jan 14 13:35:02.328516 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0' Jan 14 13:35:02.328523 kernel: Key type .fscrypt registered Jan 14 13:35:02.328530 kernel: Key type fscrypt-provisioning registered Jan 14 13:35:02.328537 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 13:35:02.328544 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:35:02.328551 kernel: ima: No architecture policies found Jan 14 13:35:02.328558 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 13:35:02.328565 kernel: clk: Disabling unused clocks Jan 14 13:35:02.328572 kernel: Freeing unused kernel memory: 39936K Jan 14 13:35:02.328580 kernel: Run /init as init process Jan 14 13:35:02.328587 kernel: with arguments: Jan 14 13:35:02.328594 kernel: /init Jan 14 13:35:02.328601 kernel: with environment: Jan 14 13:35:02.328608 kernel: HOME=/ Jan 14 13:35:02.328615 kernel: TERM=linux Jan 14 13:35:02.328622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:35:02.328630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:35:02.328641 systemd[1]: Detected virtualization microsoft. Jan 14 13:35:02.328649 systemd[1]: Detected architecture arm64. Jan 14 13:35:02.328656 systemd[1]: Running in initrd. Jan 14 13:35:02.328664 systemd[1]: No hostname configured, using default hostname. Jan 14 13:35:02.328671 systemd[1]: Hostname set to . Jan 14 13:35:02.328680 systemd[1]: Initializing machine ID from random generator. Jan 14 13:35:02.328687 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:35:02.328696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:02.328705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:02.328714 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:35:02.328722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:35:02.328729 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:35:02.328738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:35:02.328747 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:35:02.328757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:35:02.328765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:02.328773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:02.328780 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:35:02.328788 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:35:02.328796 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:35:02.328804 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:35:02.328811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:35:02.328819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:35:02.328829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:35:02.330909 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
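The device units the initrd waits for above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, are systemd's escaped form of udev paths like /dev/disk/by-label/EFI-SYSTEM. A simplified sketch of that escaping, covering only the '-' and '/' rules needed for these names; the real systemd-escape tool handles more cases, so treat this as an illustration only.

    def systemd_path_to_device_unit(path: str) -> str:
        """Simplified take on systemd's path escaping (cf. systemd-escape --path):
        strip the leading '/', turn '-' into '\\x2d', then '/' into '-'.
        Other non-alphanumeric characters are ignored here."""
        trimmed = path.lstrip("/")
        escaped = trimmed.replace("-", "\\x2d").replace("/", "-")
        return escaped + ".device"

    print(systemd_path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above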
Jan 14 13:35:02.330930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:02.330939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:02.330947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:02.330955 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:35:02.330964 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:35:02.330972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:35:02.330985 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:35:02.330993 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:35:02.331001 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:35:02.331040 systemd-journald[218]: Collecting audit messages is disabled. Jan 14 13:35:02.331063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:35:02.331071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:02.331080 systemd-journald[218]: Journal started Jan 14 13:35:02.331102 systemd-journald[218]: Runtime Journal (/run/log/journal/0b6c9f7081fc49ca8dbfe45cf667486b) is 8.0M, max 78.5M, 70.5M free. Jan 14 13:35:02.337867 systemd-modules-load[219]: Inserted module 'overlay' Jan 14 13:35:02.372614 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:35:02.372668 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:35:02.378612 kernel: Bridge firewalling registered Jan 14 13:35:02.382049 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 14 13:35:02.387092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:35:02.393571 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:02.406965 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 13:35:02.418649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:35:02.429293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:02.450103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:35:02.458997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:35:02.487009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:35:02.500018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:35:02.515865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:02.538523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:02.544727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:02.571332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:35:02.580016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:35:02.593985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 14 13:35:02.606802 dracut-cmdline[251]: dracut-dracut-053 Jan 14 13:35:02.622034 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 14 13:35:02.619729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:02.674031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:35:02.708570 systemd-resolved[272]: Positive Trust Anchors: Jan 14 13:35:02.708594 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:35:02.708625 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:35:02.710710 systemd-resolved[272]: Defaulting to hostname 'linux'. Jan 14 13:35:02.711596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:35:02.727978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:02.823890 kernel: SCSI subsystem initialized Jan 14 13:35:02.831859 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:35:02.842867 kernel: iscsi: registered transport (tcp) Jan 14 13:35:02.859929 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:35:02.859996 kernel: QLogic iSCSI HBA Driver Jan 14 13:35:02.892092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:35:02.906139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:35:02.938722 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:35:02.938768 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:35:02.945314 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:35:03.004868 kernel: raid6: neonx8 gen() 15774 MB/s Jan 14 13:35:03.013859 kernel: raid6: neonx4 gen() 15826 MB/s Jan 14 13:35:03.033849 kernel: raid6: neonx2 gen() 13201 MB/s Jan 14 13:35:03.054854 kernel: raid6: neonx1 gen() 10535 MB/s Jan 14 13:35:03.074848 kernel: raid6: int64x8 gen() 6795 MB/s Jan 14 13:35:03.094850 kernel: raid6: int64x4 gen() 7354 MB/s Jan 14 13:35:03.115849 kernel: raid6: int64x2 gen() 6111 MB/s Jan 14 13:35:03.139644 kernel: raid6: int64x1 gen() 5058 MB/s Jan 14 13:35:03.139657 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s Jan 14 13:35:03.164396 kernel: raid6: .... 
xor() 12432 MB/s, rmw enabled Jan 14 13:35:03.164417 kernel: raid6: using neon recovery algorithm Jan 14 13:35:03.176807 kernel: xor: measuring software checksum speed Jan 14 13:35:03.176821 kernel: 8regs : 21641 MB/sec Jan 14 13:35:03.180281 kernel: 32regs : 21664 MB/sec Jan 14 13:35:03.183890 kernel: arm64_neon : 27936 MB/sec Jan 14 13:35:03.187998 kernel: xor: using function: arm64_neon (27936 MB/sec) Jan 14 13:35:03.237862 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:35:03.248196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:35:03.264000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:03.286598 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 14 13:35:03.292266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:03.315131 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:35:03.331655 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Jan 14 13:35:03.358634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:35:03.373982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:35:03.413116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:03.436043 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:35:03.461710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:35:03.473531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:35:03.498939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:03.515685 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:35:03.537459 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 13:35:03.541067 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:35:03.577713 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:35:03.577735 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:35:03.577744 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:35:03.577753 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 13:35:03.583983 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:35:03.628750 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:35:03.628770 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:35:03.628902 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 13:35:03.628914 kernel: PTP clock support registered Jan 14 13:35:03.628923 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:35:03.621576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:35:03.637907 kernel: scsi host0: storvsc_host_t Jan 14 13:35:03.621752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:35:03.666893 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:35:03.666964 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:35:03.666978 kernel: scsi host1: storvsc_host_t Jan 14 13:35:03.663639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:35:03.673938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:03.706603 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:35:03.674241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:03.687794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:03.720305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:03.749125 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:35:03.749146 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:35:03.757879 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:35:03.757927 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:35:03.757938 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:35:03.560331 systemd-resolved[272]: Clock change detected. Flushing caches. Jan 14 13:35:03.603641 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: VF slot 1 added Jan 14 13:35:03.603771 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:35:03.603869 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:35:03.603880 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:35:03.603966 systemd-journald[218]: Time jumped backwards, rotating. Jan 14 13:35:03.585085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:03.616551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 14 13:35:03.631532 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:35:03.645191 kernel: hv_pci fb0c8227-4fdc-4212-a89d-8a3b690ea318: PCI VMBus probing: Using version 0x10004 Jan 14 13:35:03.750201 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:35:03.750339 kernel: hv_pci fb0c8227-4fdc-4212-a89d-8a3b690ea318: PCI host bridge to bus 4fdc:00 Jan 14 13:35:03.750445 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:35:03.750546 kernel: pci_bus 4fdc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 14 13:35:03.750650 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:35:03.750736 kernel: pci_bus 4fdc:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:35:03.750809 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:35:03.750888 kernel: pci 4fdc:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 14 13:35:03.750993 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:35:03.751075 kernel: pci 4fdc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 14 13:35:03.751154 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:35:03.751163 kernel: pci 4fdc:00:02.0: enabling Extended Tags Jan 14 13:35:03.751241 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:35:03.751321 kernel: pci 4fdc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4fdc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 14 13:35:03.751557 kernel: pci_bus 4fdc:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:35:03.751638 kernel: pci 4fdc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 14 13:35:03.708614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:03.795437 kernel: mlx5_core 4fdc:00:02.0: enabling device (0000 -> 0002) Jan 14 13:35:04.010913 kernel: mlx5_core 4fdc:00:02.0: firmware version: 16.30.1284 Jan 14 13:35:04.011055 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: VF registering: eth1 Jan 14 13:35:04.011148 kernel: mlx5_core 4fdc:00:02.0 eth1: joined to eth0 Jan 14 13:35:04.011237 kernel: mlx5_core 4fdc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 14 13:35:04.018388 kernel: mlx5_core 4fdc:00:02.0 enP20444s1: renamed from eth1 Jan 14 13:35:04.269251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:35:04.391130 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:35:04.412370 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (507) Jan 14 13:35:04.424407 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (497) Jan 14 13:35:04.434727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:35:04.446566 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:35:04.455572 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:35:04.491500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
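The storvsc probe above sizes sda at 63737856 512-byte logical blocks, reported as 32.6 GB/30.4 GiB; the two figures are the same byte count expressed in decimal versus binary units:

    blocks = 63_737_856          # "sd 0:0:0:0: [sda] 63737856 512-byte logical blocks"
    block_size = 512

    size_bytes = blocks * block_size
    print(f"{size_bytes / 10**9:.1f} GB")    # 32.6 GB  (decimal units)
    print(f"{size_bytes / 2**30:.1f} GiB")   # 30.4 GiB (binary units)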
Jan 14 13:35:04.514392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:35:04.521362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:35:05.531423 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:35:05.531847 disk-uuid[607]: The operation has completed successfully. Jan 14 13:35:05.590380 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:35:05.590476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:35:05.617487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:35:05.631169 sh[693]: Success Jan 14 13:35:05.661420 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 14 13:35:05.866591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:35:05.885028 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:35:05.894889 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 13:35:05.925228 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2 Jan 14 13:35:05.925267 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 13:35:05.925277 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:35:05.937382 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:35:05.941536 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:35:06.227780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:35:06.233681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:35:06.254620 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:35:06.273642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:35:06.299297 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 14 13:35:06.299318 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 13:35:06.299327 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:35:06.323975 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:35:06.330667 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:35:06.344147 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 14 13:35:06.351009 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:35:06.368109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:35:06.421163 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:35:06.439498 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:35:06.467310 systemd-networkd[877]: lo: Link UP Jan 14 13:35:06.471102 systemd-networkd[877]: lo: Gained carrier Jan 14 13:35:06.472811 systemd-networkd[877]: Enumeration completed Jan 14 13:35:06.476402 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:35:06.482696 systemd[1]: Reached target network.target - Network. Jan 14 13:35:06.486626 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:35:06.486629 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:35:06.569370 kernel: mlx5_core 4fdc:00:02.0 enP20444s1: Link up Jan 14 13:35:06.608409 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: Data path switched to VF: enP20444s1 Jan 14 13:35:06.608115 systemd-networkd[877]: enP20444s1: Link UP Jan 14 13:35:06.608199 systemd-networkd[877]: eth0: Link UP Jan 14 13:35:06.608301 systemd-networkd[877]: eth0: Gained carrier Jan 14 13:35:06.608311 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:35:06.617598 systemd-networkd[877]: enP20444s1: Gained carrier Jan 14 13:35:06.648403 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 13:35:07.380667 ignition[808]: Ignition 2.20.0 Jan 14 13:35:07.380679 ignition[808]: Stage: fetch-offline Jan 14 13:35:07.385256 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:35:07.380715 ignition[808]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:07.380723 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:07.380806 ignition[808]: parsed url from cmdline: "" Jan 14 13:35:07.380809 ignition[808]: no config URL provided Jan 14 13:35:07.380813 ignition[808]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:35:07.413620 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 13:35:07.380819 ignition[808]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:35:07.380823 ignition[808]: failed to fetch config: resource requires networking Jan 14 13:35:07.380993 ignition[808]: Ignition finished successfully Jan 14 13:35:07.444713 ignition[886]: Ignition 2.20.0 Jan 14 13:35:07.444722 ignition[886]: Stage: fetch Jan 14 13:35:07.444970 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:07.444980 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:07.445449 ignition[886]: parsed url from cmdline: "" Jan 14 13:35:07.445456 ignition[886]: no config URL provided Jan 14 13:35:07.445462 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:35:07.445471 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:35:07.445505 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:35:07.564043 ignition[886]: GET result: OK Jan 14 13:35:07.564125 ignition[886]: config has been read from IMDS userdata Jan 14 13:35:07.564172 ignition[886]: parsing config with SHA512: 97aeab3800aace747c7371d0042c7940e71f3d13bb1c1138345cd2de1e041bb53a6b1a091b484d4436943687bc70c2bbc889fda5d020aedfd24f3575dd9fac31 Jan 14 13:35:07.568876 unknown[886]: fetched base config from "system" Jan 14 13:35:07.569347 ignition[886]: fetch: fetch complete Jan 14 13:35:07.568883 unknown[886]: fetched base config from "system" Jan 14 13:35:07.569368 ignition[886]: fetch: fetch passed Jan 14 13:35:07.568888 unknown[886]: fetched user config from "azure" Jan 14 13:35:07.569410 ignition[886]: Ignition finished successfully Jan 14 13:35:07.574088 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:35:07.606493 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
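The fetch stage above pulls the Ignition config from the Azure IMDS userData endpoint and logs a SHA512 of what it parsed. A rough sketch of the same request, runnable only from inside an Azure VM; the Metadata header is the standard IMDS requirement, and hashing the raw response here is only an illustration, since IMDS returns userData base64-encoded and Ignition decodes it before parsing.

    import hashlib
    import urllib.request

    # Hypothetical re-run of the fetch shown in the log above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()

    # Ignition logs the SHA512 of the config it parsed; this hash is of the
    # raw (still base64-encoded) response, so it may not match that value.
    print(hashlib.sha512(body).hexdigest())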
Jan 14 13:35:07.623597 ignition[892]: Ignition 2.20.0 Jan 14 13:35:07.623611 ignition[892]: Stage: kargs Jan 14 13:35:07.627595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:35:07.623775 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:07.623784 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:07.624706 ignition[892]: kargs: kargs passed Jan 14 13:35:07.624749 ignition[892]: Ignition finished successfully Jan 14 13:35:07.654603 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:35:07.676010 ignition[898]: Ignition 2.20.0 Jan 14 13:35:07.676020 ignition[898]: Stage: disks Jan 14 13:35:07.681397 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:35:07.676172 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:07.690097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:35:07.676180 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:07.696421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:35:07.677056 ignition[898]: disks: disks passed Jan 14 13:35:07.707970 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:35:07.677097 ignition[898]: Ignition finished successfully Jan 14 13:35:07.717992 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:35:07.729887 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:35:07.756562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:35:07.972145 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:35:07.981730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:35:08.002485 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:35:08.057374 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none. Jan 14 13:35:08.058452 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:35:08.063276 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:35:08.079493 systemd-networkd[877]: eth0: Gained IPv6LL Jan 14 13:35:08.118483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:35:08.131015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:35:08.147374 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (917) Jan 14 13:35:08.161828 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 14 13:35:08.161866 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 13:35:08.165810 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:35:08.166592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:35:08.173084 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:35:08.210259 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:35:08.173115 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:35:08.180684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 14 13:35:08.199540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 13:35:08.208317 systemd-networkd[877]: enP20444s1: Gained IPv6LL Jan 14 13:35:08.223267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:35:08.738161 coreos-metadata[919]: Jan 14 13:35:08.737 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:35:08.748392 coreos-metadata[919]: Jan 14 13:35:08.748 INFO Fetch successful Jan 14 13:35:08.753754 coreos-metadata[919]: Jan 14 13:35:08.748 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:35:08.765519 coreos-metadata[919]: Jan 14 13:35:08.765 INFO Fetch successful Jan 14 13:35:08.780389 coreos-metadata[919]: Jan 14 13:35:08.780 INFO wrote hostname ci-4186.1.0-a-8a230934f7 to /sysroot/etc/hostname Jan 14 13:35:08.791017 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:35:08.987036 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:35:09.062338 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:35:09.070841 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:35:09.079275 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:35:10.032806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:35:10.048523 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:35:10.055831 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:35:10.082310 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 14 13:35:10.076541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:35:10.102380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 13:35:10.119660 ignition[1042]: INFO : Ignition 2.20.0 Jan 14 13:35:10.119660 ignition[1042]: INFO : Stage: mount Jan 14 13:35:10.119660 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:10.119660 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:10.119660 ignition[1042]: INFO : mount: mount passed Jan 14 13:35:10.119660 ignition[1042]: INFO : Ignition finished successfully Jan 14 13:35:10.119983 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:35:10.147538 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:35:10.170590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:35:10.201387 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) Jan 14 13:35:10.201427 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 14 13:35:10.207501 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 13:35:10.211760 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:35:10.218382 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:35:10.219385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
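The flatcar-metadata-hostname step logged above resolves the instance name from the same IMDS service and writes it out as the hostname. A rough, illustrative equivalent is sketched below; the endpoint and the /sysroot/etc/hostname target path are taken from the log, while the `Metadata: true` header and the trivial write logic are assumptions rather than the agent's real implementation.

```python
# Illustrative sketch of the hostname step above, not the coreos-metadata agent itself.
import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")  # endpoint as shown in the log

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})  # assumed IMDS header
with urllib.request.urlopen(req, timeout=10) as resp:
    hostname = resp.read().decode().strip()

# The log shows the fetched name being written into the new root's hostname file.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
```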
Jan 14 13:35:10.241542 ignition[1070]: INFO : Ignition 2.20.0 Jan 14 13:35:10.241542 ignition[1070]: INFO : Stage: files Jan 14 13:35:10.249668 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:10.249668 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:10.249668 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:35:10.268046 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:35:10.268046 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:35:10.345020 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:35:10.353007 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:35:10.353007 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:35:10.345424 unknown[1070]: wrote ssh authorized keys file for user: core Jan 14 13:35:10.373135 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 14 13:35:10.373135 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 14 13:35:10.558390 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:35:10.679222 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 14 13:35:10.679222 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:35:10.700075 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 14 13:35:11.006500 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:35:11.078900 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:35:11.078900 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 
13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 14 13:35:11.098813 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 14 13:35:11.560693 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 14 13:35:12.326430 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 14 13:35:12.326430 ignition[1070]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 14 13:35:12.384986 ignition[1070]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:35:12.400447 ignition[1070]: INFO : files: files passed Jan 14 13:35:12.400447 ignition[1070]: INFO : Ignition finished successfully Jan 14 13:35:12.404791 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:35:12.444033 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:35:12.458533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:35:12.542458 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:12.542458 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:12.486975 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 14 13:35:12.579226 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:12.487063 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:35:12.497065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:12.513525 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:35:12.542616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:35:12.591010 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:35:12.591113 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:35:12.604960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:35:12.618634 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:35:12.631586 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:35:12.651572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:35:12.687622 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:12.710602 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:35:12.733031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:35:12.733132 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:35:12.746676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:12.759995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:12.772914 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:35:12.784616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:35:12.784707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:12.802471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:35:12.815688 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:35:12.827791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:35:12.839015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:35:12.852580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:35:12.865121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:35:12.877731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:35:12.891121 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:35:12.904486 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:35:12.915841 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:35:12.926396 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:35:12.926470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:35:12.942469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:12.954863 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:12.967757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 14 13:35:12.974072 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:12.981699 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:35:12.981769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:35:13.001517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:35:13.001578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:13.014597 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:35:13.014642 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:35:13.025922 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:35:13.025978 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:35:13.092186 ignition[1122]: INFO : Ignition 2.20.0 Jan 14 13:35:13.092186 ignition[1122]: INFO : Stage: umount Jan 14 13:35:13.092186 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:13.092186 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:35:13.053555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:35:13.149828 ignition[1122]: INFO : umount: umount passed Jan 14 13:35:13.149828 ignition[1122]: INFO : Ignition finished successfully Jan 14 13:35:13.071797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:35:13.085066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:35:13.085128 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:13.097210 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:35:13.097265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:35:13.115107 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:35:13.115196 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:35:13.131362 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:35:13.131424 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:35:13.143710 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:35:13.143774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:35:13.150009 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:35:13.150072 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:35:13.161202 systemd[1]: Stopped target network.target - Network. Jan 14 13:35:13.172550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:35:13.172613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:35:13.184450 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:35:13.196304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:35:13.202796 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:13.210604 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:35:13.222333 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:35:13.235197 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:35:13.235238 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 14 13:35:13.246560 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:35:13.246598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:35:13.258240 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:35:13.258295 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:35:13.269265 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:35:13.269322 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:35:13.281715 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:35:13.299321 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:35:13.310400 systemd-networkd[877]: eth0: DHCPv6 lease lost Jan 14 13:35:13.312093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:35:13.316659 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:35:13.316794 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:35:13.330082 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:35:13.330171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:35:13.556037 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: Data path switched from VF: enP20444s1 Jan 14 13:35:13.344230 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:35:13.344290 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:13.384557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:35:13.395156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:35:13.395229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:35:13.407228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:35:13.407279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:13.419031 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:35:13.419077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:35:13.431579 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:35:13.431622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:13.445029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:13.490195 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:35:13.490385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:13.501625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:35:13.501668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:13.517901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:35:13.517934 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:13.539042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:35:13.539099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:35:13.556114 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:35:13.556162 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:35:13.567701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 14 13:35:13.567749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:13.603868 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:35:13.610369 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:35:13.610436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:13.623599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:35:13.623651 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:13.639692 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:35:13.846792 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jan 14 13:35:13.639742 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:13.653006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:13.653053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:13.671891 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:35:13.671993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:35:13.684648 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:35:13.684727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:35:13.702899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:35:13.702987 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:35:13.723393 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:35:13.735836 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:35:13.735936 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:35:13.770620 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:35:13.789615 systemd[1]: Switching root. Jan 14 13:35:13.929886 systemd-journald[218]: Journal stopped Jan 14 13:35:18.586453 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:35:18.586475 kernel: SELinux: policy capability open_perms=1 Jan 14 13:35:18.586485 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:35:18.586493 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:35:18.586503 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:35:18.586511 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:35:18.586519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:35:18.586527 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:35:18.586534 kernel: audit: type=1403 audit(1736861715.139:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 13:35:18.586544 systemd[1]: Successfully loaded SELinux policy in 182.320ms. Jan 14 13:35:18.586555 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.105ms. Jan 14 13:35:18.586564 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:35:18.586573 systemd[1]: Detected virtualization microsoft. 
Jan 14 13:35:18.586581 systemd[1]: Detected architecture arm64. Jan 14 13:35:18.586590 systemd[1]: Detected first boot. Jan 14 13:35:18.586600 systemd[1]: Hostname set to . Jan 14 13:35:18.586614 systemd[1]: Initializing machine ID from random generator. Jan 14 13:35:18.586622 zram_generator::config[1162]: No configuration found. Jan 14 13:35:18.586632 systemd[1]: Populated /etc with preset unit settings. Jan 14 13:35:18.586640 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 13:35:18.586649 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 13:35:18.586658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 13:35:18.586668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:35:18.586677 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 13:35:18.586686 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:35:18.586695 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:35:18.586704 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:35:18.586713 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:35:18.586721 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:35:18.586732 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:35:18.586740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:18.586749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:18.586758 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 13:35:18.586767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:35:18.586775 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 13:35:18.586784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:35:18.586793 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 13:35:18.586804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:18.586813 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 13:35:18.586822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 13:35:18.586833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 13:35:18.586842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:35:18.586851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:18.586860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:35:18.586869 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:35:18.586879 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:35:18.586888 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:35:18.586897 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:35:18.586906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 14 13:35:18.586915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:18.586925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:18.586935 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:35:18.586944 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:35:18.586953 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:35:18.586962 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 13:35:18.586971 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 13:35:18.586980 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:35:18.586990 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:35:18.587002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 13:35:18.587011 systemd[1]: Reached target machines.target - Containers. Jan 14 13:35:18.587020 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:35:18.587029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:18.587038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:35:18.587048 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 13:35:18.587057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:18.587066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:35:18.587076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:18.587085 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:35:18.587094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:18.587104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:35:18.587113 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 13:35:18.587122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 13:35:18.587131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 13:35:18.587140 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 13:35:18.587151 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:35:18.587160 kernel: fuse: init (API version 7.39) Jan 14 13:35:18.587168 kernel: loop: module loaded Jan 14 13:35:18.587177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:35:18.587186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:35:18.587196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:35:18.587220 systemd-journald[1265]: Collecting audit messages is disabled. Jan 14 13:35:18.587240 systemd-journald[1265]: Journal started Jan 14 13:35:18.587264 systemd-journald[1265]: Runtime Journal (/run/log/journal/2669e9a0731d46b9b775151b92205e3e) is 8.0M, max 78.5M, 70.5M free. Jan 14 13:35:17.632259 systemd[1]: Queued start job for default target multi-user.target. 
Jan 14 13:35:17.718236 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 13:35:17.718602 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 13:35:17.718882 systemd[1]: systemd-journald.service: Consumed 3.326s CPU time. Jan 14 13:35:18.603542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:35:18.617382 systemd[1]: verity-setup.service: Deactivated successfully. Jan 14 13:35:18.617437 kernel: ACPI: bus type drm_connector registered Jan 14 13:35:18.617450 systemd[1]: Stopped verity-setup.service. Jan 14 13:35:18.641138 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:35:18.641923 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:35:18.649834 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:35:18.656195 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:35:18.661715 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 13:35:18.667874 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 13:35:18.674102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:35:18.680803 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:35:18.687493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:18.694856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 13:35:18.694994 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:35:18.701729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:18.701856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:18.708769 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:35:18.708900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:35:18.715153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:18.715280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:18.722478 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:35:18.722615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:35:18.728827 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:18.728951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:18.735289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:35:18.742657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:35:18.749841 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:35:18.756942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:18.774635 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:35:18.785435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:35:18.794158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 13:35:18.800245 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 13:35:18.800280 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 14 13:35:18.807283 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 14 13:35:18.815106 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:35:18.823627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 13:35:18.829212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:18.864506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 13:35:18.871723 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 13:35:18.877980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:35:18.879011 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 13:35:18.885152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:35:18.886542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:35:18.894483 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 13:35:18.903690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:35:18.915551 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 14 13:35:18.926732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 13:35:18.935183 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 13:35:18.945042 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:35:18.946546 systemd-journald[1265]: Time spent on flushing to /var/log/journal/2669e9a0731d46b9b775151b92205e3e is 74.176ms for 906 entries. Jan 14 13:35:18.946546 systemd-journald[1265]: System Journal (/var/log/journal/2669e9a0731d46b9b775151b92205e3e) is 11.8M, max 2.6G, 2.6G free. Jan 14 13:35:19.096692 systemd-journald[1265]: Received client request to flush runtime journal. Jan 14 13:35:19.096747 kernel: loop0: detected capacity change from 0 to 194096 Jan 14 13:35:19.096767 systemd-journald[1265]: /var/log/journal/2669e9a0731d46b9b775151b92205e3e/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 14 13:35:19.096789 systemd-journald[1265]: Rotating system journal. Jan 14 13:35:19.096807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:35:18.958656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 13:35:18.976910 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 13:35:19.010682 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 14 13:35:19.018388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:19.026615 udevadm[1299]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 14 13:35:19.099404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 13:35:19.115975 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 14 13:35:19.117409 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 14 13:35:19.132501 kernel: loop1: detected capacity change from 0 to 116784 Jan 14 13:35:19.137527 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 14 13:35:19.137871 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 14 13:35:19.143403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:19.159629 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 13:35:19.375723 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 13:35:19.387514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:35:19.404020 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jan 14 13:35:19.404043 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jan 14 13:35:19.407617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:19.538383 kernel: loop2: detected capacity change from 0 to 28752 Jan 14 13:35:19.889694 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:35:19.901488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:19.914396 kernel: loop3: detected capacity change from 0 to 113552 Jan 14 13:35:19.925686 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 14 13:35:19.984668 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:20.000994 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:35:20.036919 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 13:35:20.067543 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 13:35:20.149915 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 13:35:20.150027 kernel: hv_vmbus: registering driver hv_balloon Jan 14 13:35:20.145900 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 13:35:20.160938 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 13:35:20.169887 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 13:35:20.169968 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 14 13:35:20.169993 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 13:35:20.184259 kernel: Console: switching to colour dummy device 80x25 Jan 14 13:35:20.186446 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:35:20.217686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:20.228183 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:35:20.230651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:20.232413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:20.258736 kernel: loop4: detected capacity change from 0 to 194096 Jan 14 13:35:20.262517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:20.273440 kernel: loop5: detected capacity change from 0 to 116784 Jan 14 13:35:20.278422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:20.279053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:35:20.297539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:20.300366 kernel: loop6: detected capacity change from 0 to 28752 Jan 14 13:35:20.311231 systemd-networkd[1340]: lo: Link UP Jan 14 13:35:20.311236 systemd-networkd[1340]: lo: Gained carrier Jan 14 13:35:20.315614 systemd-networkd[1340]: Enumeration completed Jan 14 13:35:20.315717 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:35:20.315973 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:35:20.315976 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:35:20.324794 kernel: loop7: detected capacity change from 0 to 113552 Jan 14 13:35:20.338074 (sd-merge)[1385]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 14 13:35:20.340394 (sd-merge)[1385]: Merged extensions into '/usr'. Jan 14 13:35:20.344375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 14 13:35:20.344950 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 13:35:20.361983 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 13:35:20.362003 systemd[1]: Reloading... Jan 14 13:35:20.395374 kernel: mlx5_core 4fdc:00:02.0 enP20444s1: Link up Jan 14 13:35:20.423615 kernel: hv_netvsc 000d3afb-f71d-000d-3afb-f71d000d3afb eth0: Data path switched to VF: enP20444s1 Jan 14 13:35:20.423887 systemd-networkd[1340]: enP20444s1: Link UP Jan 14 13:35:20.423982 systemd-networkd[1340]: eth0: Link UP Jan 14 13:35:20.423984 systemd-networkd[1340]: eth0: Gained carrier Jan 14 13:35:20.423998 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:35:20.428273 systemd-networkd[1340]: enP20444s1: Gained carrier Jan 14 13:35:20.433438 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 13:35:20.463413 zram_generator::config[1473]: No configuration found. Jan 14 13:35:20.588135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:35:20.662738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:35:20.669975 systemd[1]: Reloading finished in 307 ms. Jan 14 13:35:20.696383 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 13:35:20.704748 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 14 13:35:20.730513 systemd[1]: Starting ensure-sysext.service... Jan 14 13:35:20.736534 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 14 13:35:20.744948 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:35:20.753506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:35:20.763363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:35:20.773582 systemd[1]: Reloading requested from client PID 1530 ('systemctl') (unit ensure-sysext.service)... Jan 14 13:35:20.773599 systemd[1]: Reloading... Jan 14 13:35:20.800269 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:35:20.801506 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 14 13:35:20.802265 systemd-tmpfiles[1533]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 14 13:35:20.802605 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Jan 14 13:35:20.802741 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Jan 14 13:35:20.810438 lvm[1531]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:35:20.823108 systemd-tmpfiles[1533]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:35:20.823125 systemd-tmpfiles[1533]: Skipping /boot Jan 14 13:35:20.837888 systemd-tmpfiles[1533]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:35:20.837906 systemd-tmpfiles[1533]: Skipping /boot Jan 14 13:35:20.872379 zram_generator::config[1568]: No configuration found. Jan 14 13:35:20.973411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:35:21.051626 systemd[1]: Reloading finished in 277 ms. Jan 14 13:35:21.072837 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 14 13:35:21.080516 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 13:35:21.088559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:21.101889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:21.115589 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:35:21.157677 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 13:35:21.168635 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 14 13:35:21.176685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:35:21.185388 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:35:21.188431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:35:21.197439 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:35:21.211171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:21.213728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:21.230830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:21.249599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:21.251749 augenrules[1656]: No rules Jan 14 13:35:21.256029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:21.257016 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 14 13:35:21.257190 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:35:21.268026 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 14 13:35:21.276411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:35:21.284225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:21.284404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:21.291579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:21.291699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:21.300963 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:21.301115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:21.314453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:21.324623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:21.325708 systemd-resolved[1645]: Positive Trust Anchors: Jan 14 13:35:21.325718 systemd-resolved[1645]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:35:21.325749 systemd-resolved[1645]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:35:21.333313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:21.341693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:21.347455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:21.348501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:21.348661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:21.355486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:21.355614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:21.363112 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:21.363234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:21.371503 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:35:21.381537 systemd-resolved[1645]: Using system hostname 'ci-4186.1.0-a-8a230934f7'. Jan 14 13:35:21.389567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:35:21.395614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:21.398482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:21.407554 augenrules[1674]: /sbin/augenrules: No change Jan 14 13:35:21.408598 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 14 13:35:21.419088 augenrules[1693]: No rules Jan 14 13:35:21.420482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:21.440299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:21.445961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:21.446146 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 13:35:21.453157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:35:21.458547 systemd-networkd[1340]: eth0: Gained IPv6LL Jan 14 13:35:21.461302 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:35:21.461729 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:35:21.468048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:21.468191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:21.475098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:35:21.483871 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:35:21.485417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:35:21.492170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:21.492364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:21.499983 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:21.501393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:21.510438 systemd[1]: Finished ensure-sysext.service. Jan 14 13:35:21.519548 systemd[1]: Reached target network.target - Network. Jan 14 13:35:21.524956 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:35:21.531803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:21.538642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:35:21.538822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:35:22.223709 systemd-networkd[1340]: enP20444s1: Gained IPv6LL Jan 14 13:35:22.245300 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 13:35:22.252766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 13:35:24.700561 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:35:24.748226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:35:24.761496 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 13:35:24.773614 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:35:24.780922 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:35:24.786776 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:35:24.793683 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 14 13:35:24.800668 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:35:24.806639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:35:24.813611 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:35:24.821264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:35:24.821294 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:35:24.826406 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:35:24.832437 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:35:24.840310 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:35:24.849809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:35:24.856120 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:35:24.862063 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:35:24.867248 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:35:24.872421 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:35:24.872448 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:35:24.881467 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 13:35:24.890482 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:35:24.900509 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 13:35:24.907031 (chronyd)[1713]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 14 13:35:24.910329 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:35:24.917169 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:35:24.924554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:35:24.926076 jq[1720]: false Jan 14 13:35:24.933793 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:35:24.933931 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 14 13:35:24.945964 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 13:35:24.952927 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 13:35:24.957711 chronyd[1726]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 14 13:35:24.953912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:24.961639 KVP[1722]: KVP starting; pid is:1722 Jan 14 13:35:24.964220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:35:24.970673 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
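The "(chronyd)" notice above means chronyd.service expands an $OPTIONS variable that nothing defines; it is harmless, but a drop-in can set it explicitly. A sketch, where the drop-in path and the -F value are illustrative (chronyd's -F flag selects the seccomp filter level):

    # Sketch: define OPTIONS for chronyd.service via a systemd drop-in (path and value illustrative).
    mkdir -p /etc/systemd/system/chronyd.service.d
    cat <<'EOF' > /etc/systemd/system/chronyd.service.d/10-options.conf
    [Service]
    Environment=OPTIONS=-F 2
    EOF
    systemctl daemon-reload && systemctl restart chronyd.service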
Jan 14 13:35:24.979016 KVP[1722]: KVP LIC Version: 3.1 Jan 14 13:35:24.979407 kernel: hv_utils: KVP IC version 4.0 Jan 14 13:35:24.979638 chronyd[1726]: Timezone right/UTC failed leap second check, ignoring Jan 14 13:35:24.979849 chronyd[1726]: Loaded seccomp filter (level 2) Jan 14 13:35:24.984278 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:35:24.991005 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:35:25.001519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:35:25.013641 extend-filesystems[1721]: Found loop4 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found loop5 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found loop6 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found loop7 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda1 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda2 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda3 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found usr Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda4 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda6 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda7 Jan 14 13:35:25.020050 extend-filesystems[1721]: Found sda9 Jan 14 13:35:25.020050 extend-filesystems[1721]: Checking size of /dev/sda9 Jan 14 13:35:25.014258 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:35:25.134700 dbus-daemon[1716]: [system] SELinux support is enabled Jan 14 13:35:25.169318 extend-filesystems[1721]: Old size kept for /dev/sda9 Jan 14 13:35:25.169318 extend-filesystems[1721]: Found sr0 Jan 14 13:35:25.030867 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:35:25.031294 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 13:35:25.217041 update_engine[1744]: I20250114 13:35:25.133875 1744 main.cc:92] Flatcar Update Engine starting Jan 14 13:35:25.217041 update_engine[1744]: I20250114 13:35:25.137094 1744 update_check_scheduler.cc:74] Next update check in 5m3s Jan 14 13:35:25.047532 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 13:35:25.217361 jq[1746]: true Jan 14 13:35:25.055480 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:35:25.092237 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 13:35:25.111794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:35:25.112260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:35:25.113747 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:35:25.115399 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:35:25.137966 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 13:35:25.160546 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:35:25.160737 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 13:35:25.165716 systemd-logind[1740]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 13:35:25.175507 systemd-logind[1740]: New seat seat0. 
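extend-filesystems walks the block devices listed above and decides the ROOT partition needs no growth ("Old size kept for /dev/sda9"). A sketch of the kind of check and resize involved, assuming an ext4 ROOT on /dev/sda9 as is the Flatcar default (not verified from this log):

    # Sketch: compare partition size to filesystem size and grow the filesystem if it lags behind.
    lsblk -b -o NAME,SIZE /dev/sda9                                   # partition size in bytes
    dumpe2fs -h /dev/sda9 2>/dev/null | grep -E 'Block (count|size)'  # filesystem size = count * size
    resize2fs /dev/sda9                                               # no-op when it already fills the partition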
Jan 14 13:35:25.186794 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:35:25.206928 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:35:25.225528 coreos-metadata[1715]: Jan 14 13:35:25.222 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:35:25.231951 coreos-metadata[1715]: Jan 14 13:35:25.230 INFO Fetch successful Jan 14 13:35:25.231951 coreos-metadata[1715]: Jan 14 13:35:25.231 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 13:35:25.231708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 13:35:25.231939 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 13:35:25.240302 coreos-metadata[1715]: Jan 14 13:35:25.239 INFO Fetch successful Jan 14 13:35:25.241009 coreos-metadata[1715]: Jan 14 13:35:25.240 INFO Fetching http://168.63.129.16/machine/92eff7a6-19d4-40fd-9e4f-5cae236c6014/af85a8b3%2Db985%2D473b%2D91a1%2D27ac1a83a4ad.%5Fci%2D4186.1.0%2Da%2D8a230934f7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 13:35:25.248264 coreos-metadata[1715]: Jan 14 13:35:25.243 INFO Fetch successful Jan 14 13:35:25.248264 coreos-metadata[1715]: Jan 14 13:35:25.245 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:35:25.254226 (ntainerd)[1768]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 14 13:35:25.261840 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 13:35:25.261880 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:35:25.265272 jq[1767]: true Jan 14 13:35:25.270247 coreos-metadata[1715]: Jan 14 13:35:25.269 INFO Fetch successful Jan 14 13:35:25.269646 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 13:35:25.272273 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:35:25.272303 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 13:35:25.297711 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:35:25.338439 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1762) Jan 14 13:35:25.337693 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:35:25.440934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 13:35:25.458056 tar[1766]: linux-arm64/helm Jan 14 13:35:25.468641 bash[1812]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:35:25.471412 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:35:25.503551 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 13:35:25.503876 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
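coreos-metadata mixes two endpoints: the Azure wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254, the latter requiring the Metadata header. The URLs below are exactly the ones logged above:

    # Reproduce the two metadata fetches shown above.
    curl -s 'http://168.63.129.16/?comp=versions'
    curl -s -H 'Metadata: true' \
      'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'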
Jan 14 13:35:25.836051 sshd_keygen[1745]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:35:25.852166 locksmithd[1800]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:35:25.883675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:35:25.902739 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 13:35:25.931178 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 13:35:25.942161 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:35:25.942315 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:35:25.968670 containerd[1768]: time="2025-01-14T13:35:25.968586720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 14 13:35:25.975977 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 13:35:25.999655 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:35:26.021186 containerd[1768]: time="2025-01-14T13:35:26.021140600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029685720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029729480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029746320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029892000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029909760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029979080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.029991160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.030139720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.030154360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.030167160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031022 containerd[1768]: time="2025-01-14T13:35:26.030176840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030245280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030482040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030579120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030591960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030666520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 14 13:35:26.031311 containerd[1768]: time="2025-01-14T13:35:26.030706440Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:35:26.041078 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:35:26.053740 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060570840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060637080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060655960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060679400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060696040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.060858760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 14 13:35:26.061158 containerd[1768]: time="2025-01-14T13:35:26.061102520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061203320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061221520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061235960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061249840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061262520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061275080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061288080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061301400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061314480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061326400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061337560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061379240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061404400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061433 containerd[1768]: time="2025-01-14T13:35:26.061417520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061430000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061442040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061455840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061469880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061482160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061494520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061509320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061520520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061532040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061547160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061563640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061583400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061599840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061648 containerd[1768]: time="2025-01-14T13:35:26.061611880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061656760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061674680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061686560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061698040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061708080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061720520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061729840Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:35:26.061865 containerd[1768]: time="2025-01-14T13:35:26.061739320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.062040400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.062235800Z" level=info msg="Connect containerd service" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.063639000Z" level=info msg="using legacy CRI server" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.063815880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.064278800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.064951520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:35:26.066056 
containerd[1768]: time="2025-01-14T13:35:26.065554600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065594320Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065640200Z" level=info msg="Start subscribing containerd event" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065671400Z" level=info msg="Start recovering state" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065734520Z" level=info msg="Start event monitor" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065744640Z" level=info msg="Start snapshots syncer" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065754920Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065766480Z" level=info msg="Start streaming server" Jan 14 13:35:26.066056 containerd[1768]: time="2025-01-14T13:35:26.065987840Z" level=info msg="containerd successfully booted in 0.100257s" Jan 14 13:35:26.069684 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 13:35:26.077273 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:35:26.083053 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:35:26.157227 tar[1766]: linux-arm64/LICENSE Jan 14 13:35:26.157307 tar[1766]: linux-arm64/README.md Jan 14 13:35:26.174769 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 13:35:26.186509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:26.187164 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:26.194122 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:35:26.206477 systemd[1]: Startup finished in 678ms (kernel) + 13.399s (initrd) + 11.248s (userspace) = 25.326s. Jan 14 13:35:26.229427 agetty[1890]: failed to open credentials directory Jan 14 13:35:26.230905 agetty[1889]: failed to open credentials directory Jan 14 13:35:26.627512 kubelet[1898]: E0114 13:35:26.627414 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:26.630091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:26.630229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:26.658890 login[1890]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 14 13:35:26.659379 login[1889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:26.666771 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:35:26.673661 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:35:26.675882 systemd-logind[1740]: New session 1 of user core. Jan 14 13:35:26.687897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:35:26.692584 systemd[1]: Starting user@500.service - User Manager for UID 500... 
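The CRI configuration dump above shows runc driven through io.containerd.runc.v2 with SystemdCgroup:true and registry.k8s.io/pause:3.8 as the sandbox image. A minimal sketch of a config.toml fragment that yields those settings; it mirrors the dumped values and is not a copy of the node's actual file:

    # Sketch: containerd CRI settings matching the values dumped above (illustrative, not the node's file).
    cat <<'EOF' > /etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
    EOF
    systemctl restart containerd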
Jan 14 13:35:26.700224 (systemd)[1913]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 13:35:26.850576 systemd[1913]: Queued start job for default target default.target. Jan 14 13:35:26.857592 systemd[1913]: Created slice app.slice - User Application Slice. Jan 14 13:35:26.857622 systemd[1913]: Reached target paths.target - Paths. Jan 14 13:35:26.857633 systemd[1913]: Reached target timers.target - Timers. Jan 14 13:35:26.858826 systemd[1913]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:35:26.868726 systemd[1913]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:35:26.869297 systemd[1913]: Reached target sockets.target - Sockets. Jan 14 13:35:26.869310 systemd[1913]: Reached target basic.target - Basic System. Jan 14 13:35:26.869489 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:35:26.869954 systemd[1913]: Reached target default.target - Main User Target. Jan 14 13:35:26.870008 systemd[1913]: Startup finished in 163ms. Jan 14 13:35:26.870894 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:35:27.660444 login[1890]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:27.665075 systemd-logind[1740]: New session 2 of user core. Jan 14 13:35:27.672520 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 13:35:27.926431 waagent[1881]: 2025-01-14T13:35:27.926255Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 14 13:35:27.932249 waagent[1881]: 2025-01-14T13:35:27.932181Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 14 13:35:27.937549 waagent[1881]: 2025-01-14T13:35:27.937500Z INFO Daemon Daemon Python: 3.11.10 Jan 14 13:35:27.941996 waagent[1881]: 2025-01-14T13:35:27.941943Z INFO Daemon Daemon Run daemon Jan 14 13:35:27.946071 waagent[1881]: 2025-01-14T13:35:27.946025Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 14 13:35:27.955655 waagent[1881]: 2025-01-14T13:35:27.955591Z INFO Daemon Daemon Using waagent for provisioning Jan 14 13:35:27.961023 waagent[1881]: 2025-01-14T13:35:27.960979Z INFO Daemon Daemon Activate resource disk Jan 14 13:35:27.965958 waagent[1881]: 2025-01-14T13:35:27.965915Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 13:35:27.979081 waagent[1881]: 2025-01-14T13:35:27.979029Z INFO Daemon Daemon Found device: None Jan 14 13:35:27.983983 waagent[1881]: 2025-01-14T13:35:27.983937Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 13:35:27.992434 waagent[1881]: 2025-01-14T13:35:27.992390Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 13:35:28.004463 waagent[1881]: 2025-01-14T13:35:28.004415Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:35:28.010270 waagent[1881]: 2025-01-14T13:35:28.010226Z INFO Daemon Daemon Running default provisioning handler Jan 14 13:35:28.022658 waagent[1881]: 2025-01-14T13:35:28.022592Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
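The non-zero exit status 4 from the probe above generally corresponds to systemctl's "no such unit" code, which is why the daemon goes on to conclude cloud-init is not enabled on this image. The same probe by hand:

    # The check the agent runs; Flatcar ships no cloud-init unit, hence the non-zero status.
    systemctl is-enabled cloud-init-local.service; echo "exit status: $?"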
Jan 14 13:35:28.037639 waagent[1881]: 2025-01-14T13:35:28.037574Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 13:35:28.047944 waagent[1881]: 2025-01-14T13:35:28.047886Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 13:35:28.053667 waagent[1881]: 2025-01-14T13:35:28.053601Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 13:35:28.148374 waagent[1881]: 2025-01-14T13:35:28.147853Z INFO Daemon Daemon Successfully mounted dvd Jan 14 13:35:28.177033 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 13:35:28.182382 waagent[1881]: 2025-01-14T13:35:28.179087Z INFO Daemon Daemon Detect protocol endpoint Jan 14 13:35:28.184236 waagent[1881]: 2025-01-14T13:35:28.184183Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:35:28.190454 waagent[1881]: 2025-01-14T13:35:28.190406Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 14 13:35:28.197439 waagent[1881]: 2025-01-14T13:35:28.197382Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 13:35:28.204381 waagent[1881]: 2025-01-14T13:35:28.204301Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 13:35:28.210279 waagent[1881]: 2025-01-14T13:35:28.210220Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 13:35:28.257828 waagent[1881]: 2025-01-14T13:35:28.257780Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 13:35:28.264614 waagent[1881]: 2025-01-14T13:35:28.264585Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 13:35:28.270423 waagent[1881]: 2025-01-14T13:35:28.270382Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 13:35:28.700670 waagent[1881]: 2025-01-14T13:35:28.700573Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 13:35:28.707775 waagent[1881]: 2025-01-14T13:35:28.707710Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 13:35:28.716995 waagent[1881]: 2025-01-14T13:35:28.716945Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:35:28.737159 waagent[1881]: 2025-01-14T13:35:28.737114Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 14 13:35:28.743571 waagent[1881]: 2025-01-14T13:35:28.743523Z INFO Daemon Jan 14 13:35:28.746495 waagent[1881]: 2025-01-14T13:35:28.746445Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7d041a93-f6b5-44c0-8f5e-273dd0eb390a eTag: 6573558071817485625 source: Fabric] Jan 14 13:35:28.758192 waagent[1881]: 2025-01-14T13:35:28.758143Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
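Goal-state traffic goes to the wireserver over plain HTTP with an x-ms-version header carrying the negotiated protocol version (2012-11-30 in the lines above). A hedged sketch of the fetch the daemon performs next, reusing the goalstate URL already shown earlier in this log:

    # Sketch: fetch the WireServer goal state (header value taken from the protocol version logged above).
    curl -s -H 'x-ms-version: 2012-11-30' 'http://168.63.129.16/machine/?comp=goalstate'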
Jan 14 13:35:28.765140 waagent[1881]: 2025-01-14T13:35:28.765094Z INFO Daemon Jan 14 13:35:28.768048 waagent[1881]: 2025-01-14T13:35:28.768002Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:35:28.780049 waagent[1881]: 2025-01-14T13:35:28.780013Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 13:35:28.860900 waagent[1881]: 2025-01-14T13:35:28.860807Z INFO Daemon Downloaded certificate {'thumbprint': 'AE9DBEE641B80C7EB40C5FA8BADB08564F1150B0', 'hasPrivateKey': True} Jan 14 13:35:28.871793 waagent[1881]: 2025-01-14T13:35:28.871738Z INFO Daemon Downloaded certificate {'thumbprint': '5369DC30B65CE848CADAAF06F56359B91B061725', 'hasPrivateKey': False} Jan 14 13:35:28.882994 waagent[1881]: 2025-01-14T13:35:28.882936Z INFO Daemon Fetch goal state completed Jan 14 13:35:28.895457 waagent[1881]: 2025-01-14T13:35:28.895406Z INFO Daemon Daemon Starting provisioning Jan 14 13:35:28.900557 waagent[1881]: 2025-01-14T13:35:28.900503Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 13:35:28.904998 waagent[1881]: 2025-01-14T13:35:28.904955Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-8a230934f7] Jan 14 13:35:28.928390 waagent[1881]: 2025-01-14T13:35:28.927861Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-8a230934f7] Jan 14 13:35:28.935940 waagent[1881]: 2025-01-14T13:35:28.935036Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 13:35:28.942228 waagent[1881]: 2025-01-14T13:35:28.942171Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 13:35:28.978012 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:35:28.978021 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:35:28.978049 systemd-networkd[1340]: eth0: DHCP lease lost Jan 14 13:35:28.979330 waagent[1881]: 2025-01-14T13:35:28.979256Z INFO Daemon Daemon Create user account if not exists Jan 14 13:35:28.985281 waagent[1881]: 2025-01-14T13:35:28.985226Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:35:28.991737 waagent[1881]: 2025-01-14T13:35:28.991685Z INFO Daemon Daemon Configure sudoer Jan 14 13:35:28.992435 systemd-networkd[1340]: eth0: DHCPv6 lease lost Jan 14 13:35:28.996781 waagent[1881]: 2025-01-14T13:35:28.996725Z INFO Daemon Daemon Configure sshd Jan 14 13:35:29.001424 waagent[1881]: 2025-01-14T13:35:29.001338Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:35:29.018963 waagent[1881]: 2025-01-14T13:35:29.014341Z INFO Daemon Daemon Deploy ssh public key. Jan 14 13:35:29.026416 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 14 13:35:30.150222 waagent[1881]: 2025-01-14T13:35:30.150165Z INFO Daemon Daemon Provisioning complete Jan 14 13:35:30.168443 waagent[1881]: 2025-01-14T13:35:30.168395Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:35:30.175213 waagent[1881]: 2025-01-14T13:35:30.175168Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
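"Configure sshd" above refers to a snippet that disables password-based logins and enables client keep-alives. A sketch of what such a snippet typically contains; the path and values are illustrative and were not read from this node:

    # Sketch of an sshd hardening snippet of the kind described above (values illustrative).
    cat <<'EOF' >> /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    ClientAliveInterval 180
    EOF
    sshd -t    # validate; new connections pick it up since sshd here runs per-connection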
Jan 14 13:35:30.185802 waagent[1881]: 2025-01-14T13:35:30.185750Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:35:30.359325 waagent[1967]: 2025-01-14T13:35:30.358832Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:35:30.359325 waagent[1967]: 2025-01-14T13:35:30.358976Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 14 13:35:30.359325 waagent[1967]: 2025-01-14T13:35:30.359026Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:35:30.664394 waagent[1967]: 2025-01-14T13:35:30.664277Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:35:30.664582 waagent[1967]: 2025-01-14T13:35:30.664541Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:35:30.664649 waagent[1967]: 2025-01-14T13:35:30.664618Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:35:30.672288 waagent[1967]: 2025-01-14T13:35:30.672232Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:35:30.678783 waagent[1967]: 2025-01-14T13:35:30.678743Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 14 13:35:30.679229 waagent[1967]: 2025-01-14T13:35:30.679186Z INFO ExtHandler Jan 14 13:35:30.679295 waagent[1967]: 2025-01-14T13:35:30.679266Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bbe3df8e-c902-46d7-8e8a-606b3fc04359 eTag: 6573558071817485625 source: Fabric] Jan 14 13:35:30.679604 waagent[1967]: 2025-01-14T13:35:30.679564Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 13:35:30.694207 waagent[1967]: 2025-01-14T13:35:30.693568Z INFO ExtHandler Jan 14 13:35:30.694207 waagent[1967]: 2025-01-14T13:35:30.693725Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:35:30.698076 waagent[1967]: 2025-01-14T13:35:30.698037Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:35:30.853386 waagent[1967]: 2025-01-14T13:35:30.852680Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AE9DBEE641B80C7EB40C5FA8BADB08564F1150B0', 'hasPrivateKey': True} Jan 14 13:35:30.853386 waagent[1967]: 2025-01-14T13:35:30.853112Z INFO ExtHandler Downloaded certificate {'thumbprint': '5369DC30B65CE848CADAAF06F56359B91B061725', 'hasPrivateKey': False} Jan 14 13:35:30.853577 waagent[1967]: 2025-01-14T13:35:30.853530Z INFO ExtHandler Fetch goal state completed Jan 14 13:35:30.874873 waagent[1967]: 2025-01-14T13:35:30.874824Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1967 Jan 14 13:35:30.875012 waagent[1967]: 2025-01-14T13:35:30.874977Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:35:30.876592 waagent[1967]: 2025-01-14T13:35:30.876549Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:35:30.876968 waagent[1967]: 2025-01-14T13:35:30.876928Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:35:30.904621 waagent[1967]: 2025-01-14T13:35:30.904581Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:35:30.904797 waagent[1967]: 2025-01-14T13:35:30.904757Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:35:30.910970 waagent[1967]: 2025-01-14T13:35:30.910524Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:35:30.916346 systemd[1]: Reloading requested from client PID 1982 ('systemctl') (unit waagent.service)... Jan 14 13:35:30.916373 systemd[1]: Reloading... Jan 14 13:35:30.976470 zram_generator::config[2015]: No configuration found. Jan 14 13:35:31.086328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:35:31.163391 systemd[1]: Reloading finished in 246 ms. Jan 14 13:35:31.184856 waagent[1967]: 2025-01-14T13:35:31.184464Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:35:31.191853 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit waagent.service)... Jan 14 13:35:31.191869 systemd[1]: Reloading... Jan 14 13:35:31.259431 zram_generator::config[2104]: No configuration found. Jan 14 13:35:31.360823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:35:31.438454 systemd[1]: Reloading finished in 246 ms. Jan 14 13:35:31.463389 waagent[1967]: 2025-01-14T13:35:31.461225Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:35:31.463389 waagent[1967]: 2025-01-14T13:35:31.461485Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:35:32.140673 waagent[1967]: 2025-01-14T13:35:32.140597Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 13:35:32.141238 waagent[1967]: 2025-01-14T13:35:32.141190Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:35:32.141997 waagent[1967]: 2025-01-14T13:35:32.141913Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 13:35:32.142411 waagent[1967]: 2025-01-14T13:35:32.142331Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:35:32.143395 waagent[1967]: 2025-01-14T13:35:32.142802Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:35:32.143395 waagent[1967]: 2025-01-14T13:35:32.142888Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:35:32.143395 waagent[1967]: 2025-01-14T13:35:32.143012Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:35:32.143395 waagent[1967]: 2025-01-14T13:35:32.143069Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:35:32.143395 waagent[1967]: 2025-01-14T13:35:32.143111Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:35:32.143777 waagent[1967]: 2025-01-14T13:35:32.143633Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:35:32.144037 waagent[1967]: 2025-01-14T13:35:32.143852Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
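The two "Reloading" cycles above come from the agent installing and enabling waagent-network-setup.service; done by hand the sequence reduces to:

    # Equivalent of the unit setup the agent performs (unit name taken from the log).
    systemctl daemon-reload
    systemctl enable waagent-network-setup.service
    systemctl is-enabled waagent-network-setup.service   # should now report "enabled"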
Jan 14 13:35:32.144166 waagent[1967]: 2025-01-14T13:35:32.144125Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:35:32.144304 waagent[1967]: 2025-01-14T13:35:32.144269Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:35:32.144498 waagent[1967]: 2025-01-14T13:35:32.144460Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:35:32.144641 waagent[1967]: 2025-01-14T13:35:32.144610Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:35:32.145130 waagent[1967]: 2025-01-14T13:35:32.144878Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 13:35:32.145130 waagent[1967]: 2025-01-14T13:35:32.145057Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:35:32.145130 waagent[1967]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:35:32.145130 waagent[1967]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:35:32.145130 waagent[1967]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:35:32.145130 waagent[1967]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:35:32.145130 waagent[1967]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:35:32.145130 waagent[1967]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:35:32.145415 waagent[1967]: 2025-01-14T13:35:32.145330Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 13:35:32.153001 waagent[1967]: 2025-01-14T13:35:32.152953Z INFO ExtHandler ExtHandler Jan 14 13:35:32.153180 waagent[1967]: 2025-01-14T13:35:32.153143Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 39f01b5c-4a80-4180-b385-39807f1b4fd0 correlation 2e6d44e8-935b-4c0f-a997-c33500991abe created: 2025-01-14T13:34:15.422409Z] Jan 14 13:35:32.154839 waagent[1967]: 2025-01-14T13:35:32.153627Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
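/proc/net/route prints addresses as 32-bit little-endian hex, so the table above reads right to left per byte: gateway 0114C80A is 10.200.20.1, destination 0014C80A with mask 00FFFFFF is 10.200.20.0/24, and the two host routes 10813FA8 and FEA9FEA9 are 168.63.129.16 and 169.254.169.254. A quick decode of the first two:

    # Decode little-endian hex words from the routing table above (bytes read right to left).
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01   # 0114C80A -> 10.200.20.1 (default gateway)
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x00   # 0014C80A -> 10.200.20.0 (on-link /24)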
Jan 14 13:35:32.154839 waagent[1967]: 2025-01-14T13:35:32.154183Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 13:35:32.187377 waagent[1967]: 2025-01-14T13:35:32.187276Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 05E6C7FF-2200-45AB-BD57-CB7544AD472A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:35:32.201318 waagent[1967]: 2025-01-14T13:35:32.201252Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:35:32.201318 waagent[1967]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:35:32.201318 waagent[1967]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:35:32.201318 waagent[1967]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:f7:1d brd ff:ff:ff:ff:ff:ff Jan 14 13:35:32.201318 waagent[1967]: 3: enP20444s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:f7:1d brd ff:ff:ff:ff:ff:ff\ altname enP20444p0s2 Jan 14 13:35:32.201318 waagent[1967]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:35:32.201318 waagent[1967]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:35:32.201318 waagent[1967]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:35:32.201318 waagent[1967]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:35:32.201318 waagent[1967]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:35:32.201318 waagent[1967]: 2: eth0 inet6 fe80::20d:3aff:fefb:f71d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:35:32.201318 waagent[1967]: 3: enP20444s1 inet6 fe80::20d:3aff:fefb:f71d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:35:32.246083 waagent[1967]: 2025-01-14T13:35:32.246003Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 14 13:35:32.246083 waagent[1967]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.246083 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.246083 waagent[1967]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.246083 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.246083 waagent[1967]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.246083 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.246083 waagent[1967]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:35:32.246083 waagent[1967]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:35:32.246083 waagent[1967]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:35:32.248849 waagent[1967]: 2025-01-14T13:35:32.248790Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:35:32.248849 waagent[1967]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.248849 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.248849 waagent[1967]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.248849 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.248849 waagent[1967]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:35:32.248849 waagent[1967]: pkts bytes target prot opt in out source destination Jan 14 13:35:32.248849 waagent[1967]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:35:32.248849 waagent[1967]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:35:32.248849 waagent[1967]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:35:32.249078 waagent[1967]: 2025-01-14T13:35:32.249042Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:35:36.880912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:35:36.890618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:36.984924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:36.988932 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:37.082805 kubelet[2201]: E0114 13:35:37.082717 2201 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:37.085807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:37.085950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:47.336347 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:35:47.346587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:47.796023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
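The three OUTPUT rules printed above allow DNS and root-owned TCP traffic to the wireserver and drop new connections to it from any other user. An equivalent set of commands, assuming the default filter table (the agent's table choice can differ between versions):

    # Sketch: rules equivalent to the OUTPUT chain shown above (default filter table assumed).
    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP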
Jan 14 13:35:47.800576 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:47.844111 kubelet[2217]: E0114 13:35:47.844060 2217 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:47.846075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:47.846204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:48.770109 chronyd[1726]: Selected source PHC0 Jan 14 13:35:57.926053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:35:57.937536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:58.229286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:58.232667 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:58.269187 kubelet[2233]: E0114 13:35:58.269144 2233 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:58.271151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:58.271403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:08.292974 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 14 13:36:08.426150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:36:08.434510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:08.740004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:08.757729 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:08.794754 kubelet[2249]: E0114 13:36:08.794659 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:08.796844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:08.797042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:10.922661 update_engine[1744]: I20250114 13:36:10.922577 1744 update_attempter.cc:509] Updating boot flags... Jan 14 13:36:10.995420 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2272) Jan 14 13:36:11.084532 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2275) Jan 14 13:36:18.925987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:36:18.934508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
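The restart loop above is kubelet exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during "kubeadm init"/"kubeadm join", so the failures are expected until the node is bootstrapped. A minimal sketch of the file's shape only (a real kubeadm-generated file carries many more fields):

    # Sketch only: the shape of the config kubelet is waiting for; normally generated by kubeadm.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd CRI config earlier
    EOF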
Jan 14 13:36:19.216651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:19.219984 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:19.256575 kubelet[2379]: E0114 13:36:19.256492 2379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:19.258818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:19.258933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:22.406113 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:36:22.408199 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:51290.service - OpenSSH per-connection server daemon (10.200.16.10:51290). Jan 14 13:36:22.996024 sshd[2388]: Accepted publickey for core from 10.200.16.10 port 51290 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:22.997243 sshd-session[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:23.002045 systemd-logind[1740]: New session 3 of user core. Jan 14 13:36:23.007555 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:36:23.417521 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:51304.service - OpenSSH per-connection server daemon (10.200.16.10:51304). Jan 14 13:36:23.871897 sshd[2393]: Accepted publickey for core from 10.200.16.10 port 51304 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:23.873167 sshd-session[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:23.877184 systemd-logind[1740]: New session 4 of user core. Jan 14 13:36:23.886535 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:36:24.200591 sshd[2395]: Connection closed by 10.200.16.10 port 51304 Jan 14 13:36:24.200118 sshd-session[2393]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:24.202366 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:36:24.202994 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:51304.service: Deactivated successfully. Jan 14 13:36:24.205272 systemd-logind[1740]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:36:24.206084 systemd-logind[1740]: Removed session 4. Jan 14 13:36:24.293260 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:51320.service - OpenSSH per-connection server daemon (10.200.16.10:51320). Jan 14 13:36:24.773411 sshd[2400]: Accepted publickey for core from 10.200.16.10 port 51320 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:24.774622 sshd-session[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:24.778197 systemd-logind[1740]: New session 5 of user core. Jan 14 13:36:24.787480 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:36:25.110382 sshd[2402]: Connection closed by 10.200.16.10 port 51320 Jan 14 13:36:25.110218 sshd-session[2400]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:25.112961 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:51320.service: Deactivated successfully. 
Jan 14 13:36:25.114500 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:36:25.115604 systemd-logind[1740]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:36:25.116627 systemd-logind[1740]: Removed session 5. Jan 14 13:36:25.195534 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:51326.service - OpenSSH per-connection server daemon (10.200.16.10:51326). Jan 14 13:36:25.676059 sshd[2407]: Accepted publickey for core from 10.200.16.10 port 51326 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:25.677282 sshd-session[2407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:25.681552 systemd-logind[1740]: New session 6 of user core. Jan 14 13:36:25.687469 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:36:26.017023 sshd[2409]: Connection closed by 10.200.16.10 port 51326 Jan 14 13:36:26.019074 sshd-session[2407]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:26.022526 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:51326.service: Deactivated successfully. Jan 14 13:36:26.024045 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:36:26.024627 systemd-logind[1740]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:36:26.025702 systemd-logind[1740]: Removed session 6. Jan 14 13:36:26.102702 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:35962.service - OpenSSH per-connection server daemon (10.200.16.10:35962). Jan 14 13:36:26.583513 sshd[2414]: Accepted publickey for core from 10.200.16.10 port 35962 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:26.584772 sshd-session[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:26.589419 systemd-logind[1740]: New session 7 of user core. Jan 14 13:36:26.594492 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:36:26.992857 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:36:26.993136 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:36:27.004262 sudo[2417]: pam_unix(sudo:session): session closed for user root Jan 14 13:36:27.091767 sshd[2416]: Connection closed by 10.200.16.10 port 35962 Jan 14 13:36:27.091035 sshd-session[2414]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:27.093952 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:35962.service: Deactivated successfully. Jan 14 13:36:27.095642 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:36:27.096868 systemd-logind[1740]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:36:27.098110 systemd-logind[1740]: Removed session 7. Jan 14 13:36:27.182905 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:35972.service - OpenSSH per-connection server daemon (10.200.16.10:35972). Jan 14 13:36:27.663649 sshd[2422]: Accepted publickey for core from 10.200.16.10 port 35972 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:27.664903 sshd-session[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:27.668481 systemd-logind[1740]: New session 8 of user core. Jan 14 13:36:27.676556 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:36:27.933369 sudo[2426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:36:27.933631 sudo[2426]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:36:27.937042 sudo[2426]: pam_unix(sudo:session): session closed for user root Jan 14 13:36:27.941252 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:36:27.941538 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:36:27.958626 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:36:27.980075 augenrules[2448]: No rules Jan 14 13:36:27.981156 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:36:27.981320 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:36:27.983798 sudo[2425]: pam_unix(sudo:session): session closed for user root Jan 14 13:36:28.054652 sshd[2424]: Connection closed by 10.200.16.10 port 35972 Jan 14 13:36:28.055280 sshd-session[2422]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:28.059007 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:35972.service: Deactivated successfully. Jan 14 13:36:28.060496 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:36:28.061083 systemd-logind[1740]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:36:28.062068 systemd-logind[1740]: Removed session 8. Jan 14 13:36:28.139698 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:35976.service - OpenSSH per-connection server daemon (10.200.16.10:35976). Jan 14 13:36:28.619976 sshd[2456]: Accepted publickey for core from 10.200.16.10 port 35976 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:36:28.621303 sshd-session[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:36:28.625322 systemd-logind[1740]: New session 9 of user core. Jan 14 13:36:28.636496 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:36:28.890273 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:36:28.890574 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:36:29.425990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 13:36:29.431569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:29.921669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:29.925313 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:29.961684 kubelet[2480]: E0114 13:36:29.961628 2480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:29.964399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:29.964633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:30.233656 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 13:36:30.233808 (dockerd)[2493]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:36:31.050020 dockerd[2493]: time="2025-01-14T13:36:31.049966807Z" level=info msg="Starting up" Jan 14 13:36:31.293038 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3465521455-merged.mount: Deactivated successfully. Jan 14 13:36:31.401421 dockerd[2493]: time="2025-01-14T13:36:31.401270014Z" level=info msg="Loading containers: start." Jan 14 13:36:31.570556 kernel: Initializing XFRM netlink socket Jan 14 13:36:31.703175 systemd-networkd[1340]: docker0: Link UP Jan 14 13:36:31.743488 dockerd[2493]: time="2025-01-14T13:36:31.743389741Z" level=info msg="Loading containers: done." Jan 14 13:36:31.763492 dockerd[2493]: time="2025-01-14T13:36:31.763446742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:36:31.763651 dockerd[2493]: time="2025-01-14T13:36:31.763540262Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 14 13:36:31.763651 dockerd[2493]: time="2025-01-14T13:36:31.763644542Z" level=info msg="Daemon has completed initialization" Jan 14 13:36:31.806841 dockerd[2493]: time="2025-01-14T13:36:31.806739542Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:36:31.807327 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:36:32.290584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4140791851-merged.mount: Deactivated successfully. Jan 14 13:36:33.346962 containerd[1768]: time="2025-01-14T13:36:33.346566934Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 14 13:36:34.343826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045373223.mount: Deactivated successfully. 
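
dockerd's warning about not using native diff for overlay2 points at a kernel built with CONFIG_OVERLAY_FS_REDIRECT_DIR. One hedged way to confirm that option on a running kernel is to read the exposed kernel config, if the kernel provides it; the path below is an assumption (it exists only on kernels built with IKCONFIG_PROC) and is not taken from the log:

    # Hedged sketch: check whether the running kernel enables the option
    # dockerd's warning refers to. /proc/config.gz may not exist everywhere.
    import gzip

    def overlay_redirect_dir_enabled(path="/proc/config.gz"):
        try:
            with gzip.open(path, "rt") as f:
                return any(line.strip() == "CONFIG_OVERLAY_FS_REDIRECT_DIR=y" for line in f)
        except OSError:
            return None   # kernel config not exposed at this path

    print(overlay_redirect_dir_enabled())
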
Jan 14 13:36:36.391397 containerd[1768]: time="2025-01-14T13:36:36.390548078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:36.396730 containerd[1768]: time="2025-01-14T13:36:36.396687758Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Jan 14 13:36:36.400707 containerd[1768]: time="2025-01-14T13:36:36.400672838Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:36.405863 containerd[1768]: time="2025-01-14T13:36:36.405826758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:36.406921 containerd[1768]: time="2025-01-14T13:36:36.406887078Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 3.060257664s" Jan 14 13:36:36.407016 containerd[1768]: time="2025-01-14T13:36:36.407001438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 14 13:36:36.425613 containerd[1768]: time="2025-01-14T13:36:36.425573798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 14 13:36:38.978769 containerd[1768]: time="2025-01-14T13:36:38.978707300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:38.981768 containerd[1768]: time="2025-01-14T13:36:38.981726700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Jan 14 13:36:38.986798 containerd[1768]: time="2025-01-14T13:36:38.986770300Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:38.993718 containerd[1768]: time="2025-01-14T13:36:38.993662781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:38.994910 containerd[1768]: time="2025-01-14T13:36:38.994588621Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.568834183s" Jan 14 13:36:38.994910 containerd[1768]: time="2025-01-14T13:36:38.994623941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 14 13:36:39.014777 
containerd[1768]: time="2025-01-14T13:36:39.014746502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 14 13:36:40.175886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 13:36:40.186620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:40.272489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:40.281751 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:40.318689 kubelet[2762]: E0114 13:36:40.318641 2762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:40.321061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:40.321205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:41.014994 containerd[1768]: time="2025-01-14T13:36:41.014576449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:41.017732 containerd[1768]: time="2025-01-14T13:36:41.017698169Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Jan 14 13:36:41.021434 containerd[1768]: time="2025-01-14T13:36:41.021413409Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:41.027296 containerd[1768]: time="2025-01-14T13:36:41.027255770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:41.028718 containerd[1768]: time="2025-01-14T13:36:41.028180890Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 2.013271708s" Jan 14 13:36:41.028718 containerd[1768]: time="2025-01-14T13:36:41.028215970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 14 13:36:41.048197 containerd[1768]: time="2025-01-14T13:36:41.047917531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 14 13:36:42.171660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106312409.mount: Deactivated successfully. 
Jan 14 13:36:43.112775 containerd[1768]: time="2025-01-14T13:36:43.112584601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:43.118586 containerd[1768]: time="2025-01-14T13:36:43.118445802Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Jan 14 13:36:43.121801 containerd[1768]: time="2025-01-14T13:36:43.121738802Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:43.125978 containerd[1768]: time="2025-01-14T13:36:43.125931162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:43.126629 containerd[1768]: time="2025-01-14T13:36:43.126494922Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 2.078542471s" Jan 14 13:36:43.126629 containerd[1768]: time="2025-01-14T13:36:43.126530042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 14 13:36:43.146053 containerd[1768]: time="2025-01-14T13:36:43.145884163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 13:36:43.851132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759633112.mount: Deactivated successfully. 
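
The containerd pull messages report both the bytes read and the wall-clock time, so rough pull throughput can be read straight off the log; for the kube-proxy image above:

    # Rough pull-throughput arithmetic from the containerd messages above
    # (bytes read and duration are taken verbatim from the log).
    bytes_read = 25_662_011          # kube-proxy:v1.30.8, "bytes read=25662011"
    seconds    = 2.078542471         # "... in 2.078542471s"
    mib_per_s  = bytes_read / seconds / (1024 * 1024)
    print(f"{mib_per_s:.1f} MiB/s")  # ~11.8 MiB/s
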
Jan 14 13:36:44.759715 containerd[1768]: time="2025-01-14T13:36:44.759652409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:44.761901 containerd[1768]: time="2025-01-14T13:36:44.761850810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 14 13:36:44.766271 containerd[1768]: time="2025-01-14T13:36:44.766228010Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:44.770902 containerd[1768]: time="2025-01-14T13:36:44.770848410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:44.771973 containerd[1768]: time="2025-01-14T13:36:44.771852850Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.625870807s" Jan 14 13:36:44.771973 containerd[1768]: time="2025-01-14T13:36:44.771883850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 14 13:36:44.791370 containerd[1768]: time="2025-01-14T13:36:44.791309771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 14 13:36:45.463525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932668529.mount: Deactivated successfully. 
Jan 14 13:36:45.487408 containerd[1768]: time="2025-01-14T13:36:45.486654847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:45.489853 containerd[1768]: time="2025-01-14T13:36:45.489674249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 14 13:36:45.494173 containerd[1768]: time="2025-01-14T13:36:45.494126652Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:45.498861 containerd[1768]: time="2025-01-14T13:36:45.498814015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:45.499612 containerd[1768]: time="2025-01-14T13:36:45.499482135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 708.115964ms" Jan 14 13:36:45.499612 containerd[1768]: time="2025-01-14T13:36:45.499513815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 14 13:36:45.517800 containerd[1768]: time="2025-01-14T13:36:45.517732987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 14 13:36:46.202541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788054664.mount: Deactivated successfully. Jan 14 13:36:50.141257 containerd[1768]: time="2025-01-14T13:36:50.141149340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.143259 containerd[1768]: time="2025-01-14T13:36:50.143196741Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 14 13:36:50.147667 containerd[1768]: time="2025-01-14T13:36:50.147616101Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.153648 containerd[1768]: time="2025-01-14T13:36:50.153558342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.155134 containerd[1768]: time="2025-01-14T13:36:50.154958662Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.637190275s" Jan 14 13:36:50.155134 containerd[1768]: time="2025-01-14T13:36:50.154997382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 14 13:36:50.425929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
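
The restart attempts above land roughly ten and a half seconds apart (about 10.25-10.75 s across the eight attempts), consistent with a short failed run followed by a restart delay of around 10 seconds; the unit's actual RestartSec value is not shown in the log. The spacing can be checked from any two "Scheduled restart job" timestamps:

    # Spacing between two of the "Scheduled restart job" records above.
    from datetime import datetime

    t7 = datetime.strptime("13:36:40.175886", "%H:%M:%S.%f")   # restart counter 7
    t8 = datetime.strptime("13:36:50.425929", "%H:%M:%S.%f")   # restart counter 8
    print((t8 - t7).total_seconds())                           # ~10.25 s
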
Jan 14 13:36:50.434666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:50.518310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:50.522500 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:50.580605 kubelet[2911]: E0114 13:36:50.580552 2911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:50.583137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:50.583264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:55.864064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:55.869571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:55.888371 systemd[1]: Reloading requested from client PID 2976 ('systemctl') (unit session-9.scope)... Jan 14 13:36:55.888389 systemd[1]: Reloading... Jan 14 13:36:55.991550 zram_generator::config[3022]: No configuration found. Jan 14 13:36:56.094962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:36:56.171205 systemd[1]: Reloading finished in 282 ms. Jan 14 13:36:56.214449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:56.218892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:56.220412 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:36:56.220701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:56.227615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:56.325333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:56.335704 (kubelet)[3086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:36:56.372990 kubelet[3086]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:36:56.372990 kubelet[3086]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:36:56.372990 kubelet[3086]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 13:36:56.373335 kubelet[3086]: I0114 13:36:56.373030 3086 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:36:57.229998 kubelet[3086]: I0114 13:36:57.229958 3086 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:36:57.229998 kubelet[3086]: I0114 13:36:57.229988 3086 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:36:57.230207 kubelet[3086]: I0114 13:36:57.230187 3086 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:36:57.241390 kubelet[3086]: E0114 13:36:57.241335 3086 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.242864 kubelet[3086]: I0114 13:36:57.242758 3086 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:36:57.249931 kubelet[3086]: I0114 13:36:57.249865 3086 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 13:36:57.251463 kubelet[3086]: I0114 13:36:57.251027 3086 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:36:57.251463 kubelet[3086]: I0114 13:36:57.251058 3086 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-8a230934f7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:36:57.251463 kubelet[3086]: I0114 13:36:57.251221 3086 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:36:57.251463 kubelet[3086]: I0114 13:36:57.251230 3086 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:36:57.251638 kubelet[3086]: I0114 13:36:57.251344 3086 state_mem.go:36] "Initialized new in-memory 
state store" Jan 14 13:36:57.252115 kubelet[3086]: I0114 13:36:57.252102 3086 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:36:57.252336 kubelet[3086]: I0114 13:36:57.252324 3086 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:36:57.252441 kubelet[3086]: I0114 13:36:57.252431 3086 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:36:57.252551 kubelet[3086]: I0114 13:36:57.252497 3086 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:36:57.255376 kubelet[3086]: W0114 13:36:57.253725 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-8a230934f7&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.255376 kubelet[3086]: E0114 13:36:57.253771 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-8a230934f7&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.255945 kubelet[3086]: W0114 13:36:57.255902 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.255945 kubelet[3086]: E0114 13:36:57.255946 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.256375 kubelet[3086]: I0114 13:36:57.256331 3086 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:36:57.256520 kubelet[3086]: I0114 13:36:57.256500 3086 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:36:57.256554 kubelet[3086]: W0114 13:36:57.256541 3086 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
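
The node config dumped above carries the hard eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A worked example of how a quantity threshold differs from a percentage threshold (the observed values below are made up for illustration; only the thresholds come from the log):

    # Evaluate the hard eviction thresholds listed in the node config above.
    MIB = 1024 * 1024

    thresholds = {
        "memory.available":   ("quantity", 100 * MIB),   # 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def should_evict(signal, observed, capacity=None):
        kind, limit = thresholds[signal]
        if kind == "quantity":
            return observed < limit
        return observed / capacity < limit        # percentage of total capacity

    print(should_evict("memory.available", 80 * MIB))                   # True
    print(should_evict("nodefs.available", 9 * 1024**3, 60 * 1024**3))  # 15% free -> False
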
Jan 14 13:36:57.257038 kubelet[3086]: I0114 13:36:57.257006 3086 server.go:1264] "Started kubelet" Jan 14 13:36:57.264471 kubelet[3086]: E0114 13:36:57.263209 3086 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-8a230934f7.181a929ed98acb65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-8a230934f7,UID:ci-4186.1.0-a-8a230934f7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-8a230934f7,},FirstTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,LastTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-8a230934f7,}" Jan 14 13:36:57.264471 kubelet[3086]: I0114 13:36:57.263333 3086 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:36:57.264471 kubelet[3086]: I0114 13:36:57.263998 3086 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:36:57.264471 kubelet[3086]: I0114 13:36:57.264314 3086 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:36:57.265608 kubelet[3086]: I0114 13:36:57.265589 3086 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:36:57.268284 kubelet[3086]: I0114 13:36:57.268209 3086 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:36:57.269308 kubelet[3086]: E0114 13:36:57.269287 3086 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:36:57.271710 kubelet[3086]: E0114 13:36:57.271684 3086 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-8a230934f7\" not found" Jan 14 13:36:57.271920 kubelet[3086]: I0114 13:36:57.271907 3086 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:36:57.272079 kubelet[3086]: I0114 13:36:57.272066 3086 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:36:57.273029 kubelet[3086]: I0114 13:36:57.273012 3086 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:36:57.274039 kubelet[3086]: W0114 13:36:57.274000 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.274222 kubelet[3086]: E0114 13:36:57.274159 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.274325 kubelet[3086]: E0114 13:36:57.274300 3086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-8a230934f7?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Jan 14 13:36:57.275950 kubelet[3086]: I0114 13:36:57.275891 3086 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:36:57.275950 kubelet[3086]: I0114 13:36:57.275910 3086 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:36:57.276055 kubelet[3086]: I0114 13:36:57.275969 3086 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:36:57.300115 kubelet[3086]: I0114 13:36:57.300085 3086 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:36:57.300115 kubelet[3086]: I0114 13:36:57.300105 3086 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:36:57.300115 kubelet[3086]: I0114 13:36:57.300126 3086 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:36:57.305798 kubelet[3086]: I0114 13:36:57.305772 3086 policy_none.go:49] "None policy: Start" Jan 14 13:36:57.306545 kubelet[3086]: I0114 13:36:57.306465 3086 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:36:57.306545 kubelet[3086]: I0114 13:36:57.306547 3086 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:36:57.317405 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:36:57.334075 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:36:57.335058 kubelet[3086]: I0114 13:36:57.334945 3086 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:36:57.336316 kubelet[3086]: I0114 13:36:57.335986 3086 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:36:57.336316 kubelet[3086]: I0114 13:36:57.336015 3086 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:36:57.336316 kubelet[3086]: I0114 13:36:57.336034 3086 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:36:57.336316 kubelet[3086]: E0114 13:36:57.336074 3086 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:36:57.339309 kubelet[3086]: W0114 13:36:57.339265 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.339309 kubelet[3086]: E0114 13:36:57.339302 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:57.341061 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:36:57.350097 kubelet[3086]: I0114 13:36:57.350079 3086 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:36:57.351376 kubelet[3086]: I0114 13:36:57.351086 3086 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:36:57.351376 kubelet[3086]: I0114 13:36:57.351180 3086 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:36:57.352673 kubelet[3086]: E0114 13:36:57.352657 3086 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-8a230934f7\" not found" Jan 14 13:36:57.373769 kubelet[3086]: I0114 13:36:57.373691 3086 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.374449 kubelet[3086]: E0114 13:36:57.374424 3086 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.436808 kubelet[3086]: I0114 13:36:57.436509 3086 topology_manager.go:215] "Topology Admit Handler" podUID="83f7e70120f0a5cb31d0fb27e40846d1" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.438575 kubelet[3086]: I0114 13:36:57.438544 3086 topology_manager.go:215] "Topology Admit Handler" podUID="c0de61efab7c502d709c86dd11f94deb" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.441237 kubelet[3086]: I0114 13:36:57.441217 3086 topology_manager.go:215] "Topology Admit Handler" podUID="8b5ecb10f4fedc36de6909413572772a" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.446695 systemd[1]: Created slice kubepods-burstable-pod83f7e70120f0a5cb31d0fb27e40846d1.slice - libcontainer container kubepods-burstable-pod83f7e70120f0a5cb31d0fb27e40846d1.slice. Jan 14 13:36:57.470011 systemd[1]: Created slice kubepods-burstable-podc0de61efab7c502d709c86dd11f94deb.slice - libcontainer container kubepods-burstable-podc0de61efab7c502d709c86dd11f94deb.slice. 
Jan 14 13:36:57.474385 kubelet[3086]: I0114 13:36:57.473855 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474385 kubelet[3086]: I0114 13:36:57.473893 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474385 kubelet[3086]: I0114 13:36:57.473911 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474385 kubelet[3086]: I0114 13:36:57.473932 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474385 kubelet[3086]: I0114 13:36:57.473948 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474229 systemd[1]: Created slice kubepods-burstable-pod8b5ecb10f4fedc36de6909413572772a.slice - libcontainer container kubepods-burstable-pod8b5ecb10f4fedc36de6909413572772a.slice. 
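
The connection-refused errors against https://10.200.20.12:6443 are expected at this stage: on a bootstrapping control-plane node the API server is itself one of the static pods the kubelet is admitting above (from /etc/kubernetes/manifests), so client calls cannot succeed until that container is running. Meanwhile the lease controller retries with an interval that doubles on each failure, 200ms here and then 400ms, 800ms and 1.6s further down. A minimal sketch of that doubling backoff (the cap used below is an assumption, not taken from the log):

    # Minimal sketch of the doubling retry interval seen in the
    # "Failed to ensure lease exists, will retry" messages.
    def lease_retry_intervals(start_ms=200, cap_ms=7000):
        interval = start_ms
        while True:
            yield interval
            interval = min(interval * 2, cap_ms)

    gen = lease_retry_intervals()
    print([next(gen) for _ in range(5)])   # [200, 400, 800, 1600, 3200]
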
Jan 14 13:36:57.474642 kubelet[3086]: I0114 13:36:57.473964 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474642 kubelet[3086]: I0114 13:36:57.473980 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474642 kubelet[3086]: I0114 13:36:57.473996 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b5ecb10f4fedc36de6909413572772a-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-8a230934f7\" (UID: \"8b5ecb10f4fedc36de6909413572772a\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.474642 kubelet[3086]: I0114 13:36:57.474011 3086 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.475333 kubelet[3086]: E0114 13:36:57.475079 3086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-8a230934f7?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Jan 14 13:36:57.577403 kubelet[3086]: I0114 13:36:57.576265 3086 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.577403 kubelet[3086]: E0114 13:36:57.576638 3086 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.766378 containerd[1768]: time="2025-01-14T13:36:57.766287178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-8a230934f7,Uid:83f7e70120f0a5cb31d0fb27e40846d1,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:57.774311 containerd[1768]: time="2025-01-14T13:36:57.774247458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-8a230934f7,Uid:c0de61efab7c502d709c86dd11f94deb,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:57.776956 containerd[1768]: time="2025-01-14T13:36:57.776804058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-8a230934f7,Uid:8b5ecb10f4fedc36de6909413572772a,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:57.875782 kubelet[3086]: E0114 13:36:57.875667 3086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-8a230934f7?timeout=10s\": dial tcp 10.200.20.12:6443: connect: 
connection refused" interval="800ms" Jan 14 13:36:57.979185 kubelet[3086]: I0114 13:36:57.978893 3086 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:57.979384 kubelet[3086]: E0114 13:36:57.979255 3086 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:58.125384 kubelet[3086]: W0114 13:36:58.125279 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.125384 kubelet[3086]: E0114 13:36:58.125344 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.404508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012917014.mount: Deactivated successfully. Jan 14 13:36:58.433209 containerd[1768]: time="2025-01-14T13:36:58.432391064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:58.445404 containerd[1768]: time="2025-01-14T13:36:58.445330024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 14 13:36:58.450383 containerd[1768]: time="2025-01-14T13:36:58.450044424Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:58.455376 containerd[1768]: time="2025-01-14T13:36:58.454920344Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:58.462562 containerd[1768]: time="2025-01-14T13:36:58.462501944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:36:58.464729 kubelet[3086]: W0114 13:36:58.464690 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.464729 kubelet[3086]: E0114 13:36:58.464733 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.469379 containerd[1768]: time="2025-01-14T13:36:58.469108784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:58.469997 containerd[1768]: time="2025-01-14T13:36:58.469968264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 703.591406ms" Jan 14 13:36:58.471810 containerd[1768]: time="2025-01-14T13:36:58.471769744Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:58.475705 containerd[1768]: time="2025-01-14T13:36:58.475660744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:36:58.483766 containerd[1768]: time="2025-01-14T13:36:58.483729904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 709.411846ms" Jan 14 13:36:58.564879 containerd[1768]: time="2025-01-14T13:36:58.564832425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 787.962207ms" Jan 14 13:36:58.634179 kubelet[3086]: W0114 13:36:58.634130 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.634179 kubelet[3086]: E0114 13:36:58.634179 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.677004 kubelet[3086]: E0114 13:36:58.676960 3086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-8a230934f7?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Jan 14 13:36:58.700525 kubelet[3086]: W0114 13:36:58.700441 3086 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-8a230934f7&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.700525 kubelet[3086]: E0114 13:36:58.700499 3086 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-8a230934f7&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:58.781765 kubelet[3086]: I0114 13:36:58.781716 3086 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:58.782138 kubelet[3086]: E0114 13:36:58.782111 3086 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": 
dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:36:59.271468 containerd[1768]: time="2025-01-14T13:36:59.271235231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:36:59.272632 containerd[1768]: time="2025-01-14T13:36:59.271566071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:36:59.272977 containerd[1768]: time="2025-01-14T13:36:59.271588631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.273071 containerd[1768]: time="2025-01-14T13:36:59.272829751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:36:59.273619 containerd[1768]: time="2025-01-14T13:36:59.273299951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:36:59.273619 containerd[1768]: time="2025-01-14T13:36:59.273323271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.274371 containerd[1768]: time="2025-01-14T13:36:59.274281111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.275092 containerd[1768]: time="2025-01-14T13:36:59.274982351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.278829 containerd[1768]: time="2025-01-14T13:36:59.278675711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:36:59.279897 containerd[1768]: time="2025-01-14T13:36:59.279371751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:36:59.279897 containerd[1768]: time="2025-01-14T13:36:59.279408071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.279897 containerd[1768]: time="2025-01-14T13:36:59.279493191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:36:59.298195 kubelet[3086]: E0114 13:36:59.298155 3086 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Jan 14 13:36:59.303551 systemd[1]: Started cri-containerd-bc62db0681786788882737d58ca8e53711d3296aa21754df74393497a50845fe.scope - libcontainer container bc62db0681786788882737d58ca8e53711d3296aa21754df74393497a50845fe. Jan 14 13:36:59.311823 systemd[1]: Started cri-containerd-a1c568a391852becdd638e1f58385ddc3693b4a38244f348e85fbed45975f36c.scope - libcontainer container a1c568a391852becdd638e1f58385ddc3693b4a38244f348e85fbed45975f36c. 
Jan 14 13:36:59.313680 systemd[1]: Started cri-containerd-b7376a1e8a608e7897dc0e3d6d35abd4998aa5d326b46f1c0bcf90071bdbdac8.scope - libcontainer container b7376a1e8a608e7897dc0e3d6d35abd4998aa5d326b46f1c0bcf90071bdbdac8. Jan 14 13:36:59.366119 containerd[1768]: time="2025-01-14T13:36:59.364958872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-8a230934f7,Uid:c0de61efab7c502d709c86dd11f94deb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc62db0681786788882737d58ca8e53711d3296aa21754df74393497a50845fe\"" Jan 14 13:36:59.366709 containerd[1768]: time="2025-01-14T13:36:59.366211272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-8a230934f7,Uid:83f7e70120f0a5cb31d0fb27e40846d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1c568a391852becdd638e1f58385ddc3693b4a38244f348e85fbed45975f36c\"" Jan 14 13:36:59.373274 containerd[1768]: time="2025-01-14T13:36:59.373078232Z" level=info msg="CreateContainer within sandbox \"a1c568a391852becdd638e1f58385ddc3693b4a38244f348e85fbed45975f36c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:36:59.373274 containerd[1768]: time="2025-01-14T13:36:59.373213472Z" level=info msg="CreateContainer within sandbox \"bc62db0681786788882737d58ca8e53711d3296aa21754df74393497a50845fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:36:59.377054 containerd[1768]: time="2025-01-14T13:36:59.376861032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-8a230934f7,Uid:8b5ecb10f4fedc36de6909413572772a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7376a1e8a608e7897dc0e3d6d35abd4998aa5d326b46f1c0bcf90071bdbdac8\"" Jan 14 13:36:59.381385 containerd[1768]: time="2025-01-14T13:36:59.379594952Z" level=info msg="CreateContainer within sandbox \"b7376a1e8a608e7897dc0e3d6d35abd4998aa5d326b46f1c0bcf90071bdbdac8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:36:59.734406 kubelet[3086]: E0114 13:36:59.734279 3086 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-8a230934f7.181a929ed98acb65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-8a230934f7,UID:ci-4186.1.0-a-8a230934f7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-8a230934f7,},FirstTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,LastTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-8a230934f7,}" Jan 14 13:36:59.889254 containerd[1768]: time="2025-01-14T13:36:59.889211476Z" level=info msg="CreateContainer within sandbox \"a1c568a391852becdd638e1f58385ddc3693b4a38244f348e85fbed45975f36c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"843a79e692c70255f4ac49aea4c718abde0e2a01059ae577cfc2cf106e6348d9\"" Jan 14 13:36:59.889858 containerd[1768]: time="2025-01-14T13:36:59.889832156Z" level=info msg="StartContainer for \"843a79e692c70255f4ac49aea4c718abde0e2a01059ae577cfc2cf106e6348d9\"" Jan 14 13:36:59.911571 systemd[1]: 
Started cri-containerd-843a79e692c70255f4ac49aea4c718abde0e2a01059ae577cfc2cf106e6348d9.scope - libcontainer container 843a79e692c70255f4ac49aea4c718abde0e2a01059ae577cfc2cf106e6348d9. Jan 14 13:36:59.915762 containerd[1768]: time="2025-01-14T13:36:59.915703997Z" level=info msg="CreateContainer within sandbox \"bc62db0681786788882737d58ca8e53711d3296aa21754df74393497a50845fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf171081949ca556d2239220d06b0e23971833f5e372b1b959a36b0495d2147d\"" Jan 14 13:36:59.916793 containerd[1768]: time="2025-01-14T13:36:59.916680837Z" level=info msg="StartContainer for \"bf171081949ca556d2239220d06b0e23971833f5e372b1b959a36b0495d2147d\"" Jan 14 13:36:59.924378 containerd[1768]: time="2025-01-14T13:36:59.924305637Z" level=info msg="CreateContainer within sandbox \"b7376a1e8a608e7897dc0e3d6d35abd4998aa5d326b46f1c0bcf90071bdbdac8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92fea2fc2cff194ad079b0633b1848e7572ec4661ca66b3bbd99d11835ab73e0\"" Jan 14 13:36:59.925493 containerd[1768]: time="2025-01-14T13:36:59.925470597Z" level=info msg="StartContainer for \"92fea2fc2cff194ad079b0633b1848e7572ec4661ca66b3bbd99d11835ab73e0\"" Jan 14 13:36:59.956518 systemd[1]: Started cri-containerd-bf171081949ca556d2239220d06b0e23971833f5e372b1b959a36b0495d2147d.scope - libcontainer container bf171081949ca556d2239220d06b0e23971833f5e372b1b959a36b0495d2147d. Jan 14 13:36:59.960963 systemd[1]: Started cri-containerd-92fea2fc2cff194ad079b0633b1848e7572ec4661ca66b3bbd99d11835ab73e0.scope - libcontainer container 92fea2fc2cff194ad079b0633b1848e7572ec4661ca66b3bbd99d11835ab73e0. Jan 14 13:36:59.979061 containerd[1768]: time="2025-01-14T13:36:59.979008397Z" level=info msg="StartContainer for \"843a79e692c70255f4ac49aea4c718abde0e2a01059ae577cfc2cf106e6348d9\" returns successfully" Jan 14 13:37:00.012130 containerd[1768]: time="2025-01-14T13:37:00.012033878Z" level=info msg="StartContainer for \"bf171081949ca556d2239220d06b0e23971833f5e372b1b959a36b0495d2147d\" returns successfully" Jan 14 13:37:00.037778 containerd[1768]: time="2025-01-14T13:37:00.037660718Z" level=info msg="StartContainer for \"92fea2fc2cff194ad079b0633b1848e7572ec4661ca66b3bbd99d11835ab73e0\" returns successfully" Jan 14 13:37:00.384490 kubelet[3086]: I0114 13:37:00.384367 3086 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:01.853556 kubelet[3086]: E0114 13:37:01.853515 3086 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-8a230934f7\" not found" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:01.938178 kubelet[3086]: I0114 13:37:01.938001 3086 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:02.257539 kubelet[3086]: I0114 13:37:02.257498 3086 apiserver.go:52] "Watching apiserver" Jan 14 13:37:02.273218 kubelet[3086]: I0114 13:37:02.273184 3086 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:37:03.767148 systemd[1]: Reloading requested from client PID 3355 ('systemctl') (unit session-9.scope)... Jan 14 13:37:03.767464 systemd[1]: Reloading... Jan 14 13:37:03.863386 zram_generator::config[3395]: No configuration found. 
Jan 14 13:37:03.954049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:37:04.042766 systemd[1]: Reloading finished in 274 ms. Jan 14 13:37:04.077189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:37:04.077369 kubelet[3086]: E0114 13:37:04.077181 3086 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4186.1.0-a-8a230934f7.181a929ed98acb65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-8a230934f7,UID:ci-4186.1.0-a-8a230934f7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-8a230934f7,},FirstTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,LastTimestamp:2025-01-14 13:36:57.256987493 +0000 UTC m=+0.918463529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-8a230934f7,}" Jan 14 13:37:04.089848 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:37:04.090131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:37:04.090187 systemd[1]: kubelet.service: Consumed 1.241s CPU time, 113.7M memory peak, 0B memory swap peak. Jan 14 13:37:04.094619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:37:04.204653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:37:04.209421 (kubelet)[3459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:37:04.277690 kubelet[3459]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:37:04.277690 kubelet[3459]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:37:04.277690 kubelet[3459]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:37:04.278013 kubelet[3459]: I0114 13:37:04.277748 3459 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:37:04.281589 kubelet[3459]: I0114 13:37:04.281556 3459 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:37:04.281589 kubelet[3459]: I0114 13:37:04.281581 3459 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:37:04.281791 kubelet[3459]: I0114 13:37:04.281771 3459 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:37:04.287562 kubelet[3459]: I0114 13:37:04.286843 3459 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 14 13:37:04.288769 kubelet[3459]: I0114 13:37:04.288743 3459 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:37:04.296051 kubelet[3459]: I0114 13:37:04.295903 3459 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 13:37:04.296704 kubelet[3459]: I0114 13:37:04.296667 3459 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:37:04.296870 kubelet[3459]: I0114 13:37:04.296702 3459 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-8a230934f7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:37:04.296964 kubelet[3459]: I0114 13:37:04.296877 3459 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:37:04.296964 kubelet[3459]: I0114 13:37:04.296885 3459 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:37:04.296964 kubelet[3459]: I0114 13:37:04.296918 3459 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:37:04.297036 kubelet[3459]: I0114 13:37:04.297015 3459 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:37:04.297036 kubelet[3459]: I0114 13:37:04.297030 3459 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:37:04.297182 kubelet[3459]: I0114 13:37:04.297055 3459 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:37:04.297182 kubelet[3459]: I0114 13:37:04.297075 3459 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:37:04.301376 kubelet[3459]: I0114 13:37:04.300482 3459 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:37:04.301376 kubelet[3459]: I0114 13:37:04.300640 3459 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:37:04.301376 kubelet[3459]: I0114 13:37:04.300991 3459 server.go:1264] "Started kubelet" Jan 14 13:37:04.302548 
kubelet[3459]: I0114 13:37:04.302490 3459 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:37:04.303194 kubelet[3459]: I0114 13:37:04.303174 3459 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:37:04.303288 kubelet[3459]: I0114 13:37:04.303204 3459 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:37:04.308648 kubelet[3459]: I0114 13:37:04.308617 3459 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:37:04.309623 kubelet[3459]: I0114 13:37:04.309605 3459 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:37:04.310537 kubelet[3459]: I0114 13:37:04.310522 3459 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:37:04.312106 kubelet[3459]: I0114 13:37:04.312090 3459 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:37:04.312307 kubelet[3459]: I0114 13:37:04.312298 3459 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:37:04.313826 kubelet[3459]: I0114 13:37:04.313797 3459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:37:04.314878 kubelet[3459]: I0114 13:37:04.314859 3459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:37:04.314983 kubelet[3459]: I0114 13:37:04.314973 3459 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:37:04.315075 kubelet[3459]: I0114 13:37:04.315047 3459 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:37:04.315215 kubelet[3459]: E0114 13:37:04.315193 3459 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:37:04.321138 kubelet[3459]: I0114 13:37:04.321106 3459 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:37:04.321234 kubelet[3459]: I0114 13:37:04.321210 3459 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:37:04.326457 kubelet[3459]: I0114 13:37:04.326339 3459 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:37:04.329172 kubelet[3459]: E0114 13:37:04.328919 3459 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:37:04.397659 kubelet[3459]: I0114 13:37:04.397639 3459 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:37:04.397811 kubelet[3459]: I0114 13:37:04.397799 3459 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:37:04.397866 kubelet[3459]: I0114 13:37:04.397859 3459 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:37:04.398043 kubelet[3459]: I0114 13:37:04.398031 3459 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:37:04.398112 kubelet[3459]: I0114 13:37:04.398091 3459 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:37:04.398242 kubelet[3459]: I0114 13:37:04.398233 3459 policy_none.go:49] "None policy: Start" Jan 14 13:37:04.399000 kubelet[3459]: I0114 13:37:04.398985 3459 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:37:04.399160 kubelet[3459]: I0114 13:37:04.399151 3459 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:37:04.399325 kubelet[3459]: I0114 13:37:04.399315 3459 state_mem.go:75] "Updated machine memory state" Jan 14 13:37:04.403314 kubelet[3459]: I0114 13:37:04.403298 3459 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:37:04.403973 kubelet[3459]: I0114 13:37:04.403709 3459 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:37:04.403973 kubelet[3459]: I0114 13:37:04.403822 3459 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:37:04.413482 kubelet[3459]: I0114 13:37:04.413416 3459 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.416573 kubelet[3459]: I0114 13:37:04.416395 3459 topology_manager.go:215] "Topology Admit Handler" podUID="83f7e70120f0a5cb31d0fb27e40846d1" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.416573 kubelet[3459]: I0114 13:37:04.416499 3459 topology_manager.go:215] "Topology Admit Handler" podUID="c0de61efab7c502d709c86dd11f94deb" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.417111 kubelet[3459]: I0114 13:37:04.416690 3459 topology_manager.go:215] "Topology Admit Handler" podUID="8b5ecb10f4fedc36de6909413572772a" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.427793 kubelet[3459]: W0114 13:37:04.427763 3459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:37:04.431574 kubelet[3459]: W0114 13:37:04.430782 3459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:37:04.431574 kubelet[3459]: I0114 13:37:04.430857 3459 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.431574 kubelet[3459]: I0114 13:37:04.430921 3459 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.431574 kubelet[3459]: W0114 13:37:04.431045 3459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:37:04.614294 kubelet[3459]: I0114 
13:37:04.613631 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614294 kubelet[3459]: I0114 13:37:04.613681 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614294 kubelet[3459]: I0114 13:37:04.613709 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614294 kubelet[3459]: I0114 13:37:04.613731 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b5ecb10f4fedc36de6909413572772a-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-8a230934f7\" (UID: \"8b5ecb10f4fedc36de6909413572772a\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614294 kubelet[3459]: I0114 13:37:04.613750 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614507 kubelet[3459]: I0114 13:37:04.613764 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614507 kubelet[3459]: I0114 13:37:04.613780 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83f7e70120f0a5cb31d0fb27e40846d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" (UID: \"83f7e70120f0a5cb31d0fb27e40846d1\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614507 kubelet[3459]: I0114 13:37:04.613795 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:04.614507 kubelet[3459]: I0114 13:37:04.613813 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c0de61efab7c502d709c86dd11f94deb-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-8a230934f7\" (UID: \"c0de61efab7c502d709c86dd11f94deb\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:05.187289 sudo[3489]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 14 13:37:05.187817 sudo[3489]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 14 13:37:05.299445 kubelet[3459]: I0114 13:37:05.298196 3459 apiserver.go:52] "Watching apiserver" Jan 14 13:37:05.312602 kubelet[3459]: I0114 13:37:05.312551 3459 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:37:05.386270 kubelet[3459]: W0114 13:37:05.386236 3459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:37:05.386413 kubelet[3459]: E0114 13:37:05.386297 3459 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-8a230934f7\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" Jan 14 13:37:05.410692 kubelet[3459]: I0114 13:37:05.410132 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-8a230934f7" podStartSLOduration=1.410114736 podStartE2EDuration="1.410114736s" podCreationTimestamp="2025-01-14 13:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:05.393190932 +0000 UTC m=+1.180807794" watchObservedRunningTime="2025-01-14 13:37:05.410114736 +0000 UTC m=+1.197731598" Jan 14 13:37:05.421401 kubelet[3459]: I0114 13:37:05.421119 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-8a230934f7" podStartSLOduration=1.421104819 podStartE2EDuration="1.421104819s" podCreationTimestamp="2025-01-14 13:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:05.410967137 +0000 UTC m=+1.198583999" watchObservedRunningTime="2025-01-14 13:37:05.421104819 +0000 UTC m=+1.208721641" Jan 14 13:37:05.433295 kubelet[3459]: I0114 13:37:05.433239 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-8a230934f7" podStartSLOduration=1.433224342 podStartE2EDuration="1.433224342s" podCreationTimestamp="2025-01-14 13:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:05.421295259 +0000 UTC m=+1.208912121" watchObservedRunningTime="2025-01-14 13:37:05.433224342 +0000 UTC m=+1.220841204" Jan 14 13:37:05.638371 sudo[3489]: pam_unix(sudo:session): session closed for user root Jan 14 13:37:07.563666 sudo[2459]: pam_unix(sudo:session): session closed for user root Jan 14 13:37:07.634365 sshd[2458]: Connection closed by 10.200.16.10 port 35976 Jan 14 13:37:07.634918 sshd-session[2456]: pam_unix(sshd:session): session closed for user core Jan 14 13:37:07.637645 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:35976.service: Deactivated successfully. Jan 14 13:37:07.639912 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 14 13:37:07.640862 systemd[1]: session-9.scope: Consumed 7.733s CPU time, 190.0M memory peak, 0B memory swap peak. Jan 14 13:37:07.642044 systemd-logind[1740]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:37:07.643136 systemd-logind[1740]: Removed session 9. Jan 14 13:37:20.378979 kubelet[3459]: I0114 13:37:20.378252 3459 topology_manager.go:215] "Topology Admit Handler" podUID="17222547-41e8-4fc3-862e-b19f632d5385" podNamespace="kube-system" podName="cilium-operator-599987898-wcwnz" Jan 14 13:37:20.381228 kubelet[3459]: I0114 13:37:20.381041 3459 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:37:20.381530 containerd[1768]: time="2025-01-14T13:37:20.381408596Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:37:20.383123 kubelet[3459]: I0114 13:37:20.382413 3459 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:37:20.390369 systemd[1]: Created slice kubepods-besteffort-pod17222547_41e8_4fc3_862e_b19f632d5385.slice - libcontainer container kubepods-besteffort-pod17222547_41e8_4fc3_862e_b19f632d5385.slice. Jan 14 13:37:20.406378 kubelet[3459]: I0114 13:37:20.406331 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17222547-41e8-4fc3-862e-b19f632d5385-cilium-config-path\") pod \"cilium-operator-599987898-wcwnz\" (UID: \"17222547-41e8-4fc3-862e-b19f632d5385\") " pod="kube-system/cilium-operator-599987898-wcwnz" Jan 14 13:37:20.406378 kubelet[3459]: I0114 13:37:20.406374 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhwvb\" (UniqueName: \"kubernetes.io/projected/17222547-41e8-4fc3-862e-b19f632d5385-kube-api-access-bhwvb\") pod \"cilium-operator-599987898-wcwnz\" (UID: \"17222547-41e8-4fc3-862e-b19f632d5385\") " pod="kube-system/cilium-operator-599987898-wcwnz" Jan 14 13:37:20.504432 kubelet[3459]: I0114 13:37:20.504383 3459 topology_manager.go:215] "Topology Admit Handler" podUID="92d2c137-428d-4189-bfad-296022b5cdc7" podNamespace="kube-system" podName="kube-proxy-sstkv" Jan 14 13:37:20.513167 kubelet[3459]: I0114 13:37:20.512939 3459 topology_manager.go:215] "Topology Admit Handler" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" podNamespace="kube-system" podName="cilium-22szb" Jan 14 13:37:20.513719 systemd[1]: Created slice kubepods-besteffort-pod92d2c137_428d_4189_bfad_296022b5cdc7.slice - libcontainer container kubepods-besteffort-pod92d2c137_428d_4189_bfad_296022b5cdc7.slice. Jan 14 13:37:20.525862 systemd[1]: Created slice kubepods-burstable-pod25fbf88b_86d0_48e5_b9a5_948785e2c45b.slice - libcontainer container kubepods-burstable-pod25fbf88b_86d0_48e5_b9a5_948785e2c45b.slice. 
Jan 14 13:37:20.607872 kubelet[3459]: I0114 13:37:20.607803 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92d2c137-428d-4189-bfad-296022b5cdc7-kube-proxy\") pod \"kube-proxy-sstkv\" (UID: \"92d2c137-428d-4189-bfad-296022b5cdc7\") " pod="kube-system/kube-proxy-sstkv" Jan 14 13:37:20.607872 kubelet[3459]: I0114 13:37:20.607871 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cni-path\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608143 kubelet[3459]: I0114 13:37:20.607896 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-etc-cni-netd\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608143 kubelet[3459]: I0114 13:37:20.607928 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25fbf88b-86d0-48e5-b9a5-948785e2c45b-clustermesh-secrets\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608143 kubelet[3459]: I0114 13:37:20.607948 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hubble-tls\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608143 kubelet[3459]: I0114 13:37:20.607963 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vfr\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-kube-api-access-57vfr\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608143 kubelet[3459]: I0114 13:37:20.607979 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-kernel\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.607994 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-net\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.608035 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92d2c137-428d-4189-bfad-296022b5cdc7-xtables-lock\") pod \"kube-proxy-sstkv\" (UID: \"92d2c137-428d-4189-bfad-296022b5cdc7\") " pod="kube-system/kube-proxy-sstkv" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.608061 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-bpf-maps\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.608076 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-lib-modules\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.608096 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-xtables-lock\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608255 kubelet[3459]: I0114 13:37:20.608111 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwqw\" (UniqueName: \"kubernetes.io/projected/92d2c137-428d-4189-bfad-296022b5cdc7-kube-api-access-6wwqw\") pod \"kube-proxy-sstkv\" (UID: \"92d2c137-428d-4189-bfad-296022b5cdc7\") " pod="kube-system/kube-proxy-sstkv" Jan 14 13:37:20.608396 kubelet[3459]: I0114 13:37:20.608127 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hostproc\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608396 kubelet[3459]: I0114 13:37:20.608142 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-cgroup\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608396 kubelet[3459]: I0114 13:37:20.608157 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-config-path\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.608396 kubelet[3459]: I0114 13:37:20.608171 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92d2c137-428d-4189-bfad-296022b5cdc7-lib-modules\") pod \"kube-proxy-sstkv\" (UID: \"92d2c137-428d-4189-bfad-296022b5cdc7\") " pod="kube-system/kube-proxy-sstkv" Jan 14 13:37:20.608396 kubelet[3459]: I0114 13:37:20.608188 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-run\") pod \"cilium-22szb\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " pod="kube-system/cilium-22szb" Jan 14 13:37:20.697081 containerd[1768]: time="2025-01-14T13:37:20.697021986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wcwnz,Uid:17222547-41e8-4fc3-862e-b19f632d5385,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:20.763857 containerd[1768]: time="2025-01-14T13:37:20.763687233Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:37:20.763857 containerd[1768]: time="2025-01-14T13:37:20.763747513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:37:20.763857 containerd[1768]: time="2025-01-14T13:37:20.763804713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.764384 containerd[1768]: time="2025-01-14T13:37:20.764314153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.781531 systemd[1]: Started cri-containerd-a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d.scope - libcontainer container a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d. Jan 14 13:37:20.808926 containerd[1768]: time="2025-01-14T13:37:20.808894717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wcwnz,Uid:17222547-41e8-4fc3-862e-b19f632d5385,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\"" Jan 14 13:37:20.811537 containerd[1768]: time="2025-01-14T13:37:20.811435758Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 14 13:37:20.823755 containerd[1768]: time="2025-01-14T13:37:20.823511719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sstkv,Uid:92d2c137-428d-4189-bfad-296022b5cdc7,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:20.830488 containerd[1768]: time="2025-01-14T13:37:20.830457519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22szb,Uid:25fbf88b-86d0-48e5-b9a5-948785e2c45b,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:20.900233 containerd[1768]: time="2025-01-14T13:37:20.899683806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:37:20.900233 containerd[1768]: time="2025-01-14T13:37:20.899767446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:37:20.900233 containerd[1768]: time="2025-01-14T13:37:20.899796166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.900233 containerd[1768]: time="2025-01-14T13:37:20.899883766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.917703 containerd[1768]: time="2025-01-14T13:37:20.917297048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:37:20.918384 containerd[1768]: time="2025-01-14T13:37:20.917911808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:37:20.918384 containerd[1768]: time="2025-01-14T13:37:20.917933208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.918384 containerd[1768]: time="2025-01-14T13:37:20.918019208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:20.921193 systemd[1]: Started cri-containerd-ef1c3cc5afa05162b8972edc82bd26d83327e24471b4fac15bea407fb8643235.scope - libcontainer container ef1c3cc5afa05162b8972edc82bd26d83327e24471b4fac15bea407fb8643235. Jan 14 13:37:20.935499 systemd[1]: Started cri-containerd-76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4.scope - libcontainer container 76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4. Jan 14 13:37:20.955214 containerd[1768]: time="2025-01-14T13:37:20.955054612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sstkv,Uid:92d2c137-428d-4189-bfad-296022b5cdc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef1c3cc5afa05162b8972edc82bd26d83327e24471b4fac15bea407fb8643235\"" Jan 14 13:37:20.961989 containerd[1768]: time="2025-01-14T13:37:20.961716172Z" level=info msg="CreateContainer within sandbox \"ef1c3cc5afa05162b8972edc82bd26d83327e24471b4fac15bea407fb8643235\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:37:20.966889 containerd[1768]: time="2025-01-14T13:37:20.966855253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22szb,Uid:25fbf88b-86d0-48e5-b9a5-948785e2c45b,Namespace:kube-system,Attempt:0,} returns sandbox id \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\"" Jan 14 13:37:21.009848 containerd[1768]: time="2025-01-14T13:37:21.009739377Z" level=info msg="CreateContainer within sandbox \"ef1c3cc5afa05162b8972edc82bd26d83327e24471b4fac15bea407fb8643235\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d043d163632cc61e623135f491a2221ca32eaef273c1a720ba293811f2d2920\"" Jan 14 13:37:21.010978 containerd[1768]: time="2025-01-14T13:37:21.010254857Z" level=info msg="StartContainer for \"6d043d163632cc61e623135f491a2221ca32eaef273c1a720ba293811f2d2920\"" Jan 14 13:37:21.033508 systemd[1]: Started cri-containerd-6d043d163632cc61e623135f491a2221ca32eaef273c1a720ba293811f2d2920.scope - libcontainer container 6d043d163632cc61e623135f491a2221ca32eaef273c1a720ba293811f2d2920. Jan 14 13:37:21.063447 containerd[1768]: time="2025-01-14T13:37:21.063400822Z" level=info msg="StartContainer for \"6d043d163632cc61e623135f491a2221ca32eaef273c1a720ba293811f2d2920\" returns successfully" Jan 14 13:37:21.419605 kubelet[3459]: I0114 13:37:21.419402 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sstkv" podStartSLOduration=1.419384537 podStartE2EDuration="1.419384537s" podCreationTimestamp="2025-01-14 13:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:21.418562057 +0000 UTC m=+17.206178879" watchObservedRunningTime="2025-01-14 13:37:21.419384537 +0000 UTC m=+17.207001399" Jan 14 13:37:22.380951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885217149.mount: Deactivated successfully. 
Jan 14 13:37:22.786398 containerd[1768]: time="2025-01-14T13:37:22.785890550Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:22.789074 containerd[1768]: time="2025-01-14T13:37:22.789041350Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282" Jan 14 13:37:22.793798 containerd[1768]: time="2025-01-14T13:37:22.793757710Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:22.796306 containerd[1768]: time="2025-01-14T13:37:22.796274231Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.984805913s" Jan 14 13:37:22.796515 containerd[1768]: time="2025-01-14T13:37:22.796308591Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 14 13:37:22.798386 containerd[1768]: time="2025-01-14T13:37:22.798322311Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 14 13:37:22.799021 containerd[1768]: time="2025-01-14T13:37:22.798680391Z" level=info msg="CreateContainer within sandbox \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 14 13:37:22.839163 containerd[1768]: time="2025-01-14T13:37:22.839117275Z" level=info msg="CreateContainer within sandbox \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\"" Jan 14 13:37:22.839686 containerd[1768]: time="2025-01-14T13:37:22.839656915Z" level=info msg="StartContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\"" Jan 14 13:37:22.867572 systemd[1]: Started cri-containerd-b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e.scope - libcontainer container b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e. 
Jan 14 13:37:22.889945 containerd[1768]: time="2025-01-14T13:37:22.889821600Z" level=info msg="StartContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" returns successfully" Jan 14 13:37:24.331029 kubelet[3459]: I0114 13:37:24.330824 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wcwnz" podStartSLOduration=2.344366947 podStartE2EDuration="4.33080646s" podCreationTimestamp="2025-01-14 13:37:20 +0000 UTC" firstStartedPulling="2025-01-14 13:37:20.810597678 +0000 UTC m=+16.598214540" lastFinishedPulling="2025-01-14 13:37:22.797037191 +0000 UTC m=+18.584654053" observedRunningTime="2025-01-14 13:37:23.435748373 +0000 UTC m=+19.223365235" watchObservedRunningTime="2025-01-14 13:37:24.33080646 +0000 UTC m=+20.118423322" Jan 14 13:37:28.120694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324820452.mount: Deactivated successfully. Jan 14 13:37:30.188977 containerd[1768]: time="2025-01-14T13:37:30.188900741Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:30.190957 containerd[1768]: time="2025-01-14T13:37:30.190896101Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650958" Jan 14 13:37:30.196573 containerd[1768]: time="2025-01-14T13:37:30.196510142Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:30.198698 containerd[1768]: time="2025-01-14T13:37:30.198664262Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.400279431s" Jan 14 13:37:30.198887 containerd[1768]: time="2025-01-14T13:37:30.198795102Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 14 13:37:30.201119 containerd[1768]: time="2025-01-14T13:37:30.201008022Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:37:30.236600 containerd[1768]: time="2025-01-14T13:37:30.236552903Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\"" Jan 14 13:37:30.237386 containerd[1768]: time="2025-01-14T13:37:30.237136903Z" level=info msg="StartContainer for \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\"" Jan 14 13:37:30.264612 systemd[1]: Started cri-containerd-88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b.scope - libcontainer container 88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b. 
Jan 14 13:37:30.295623 systemd[1]: cri-containerd-88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b.scope: Deactivated successfully. Jan 14 13:37:30.297737 containerd[1768]: time="2025-01-14T13:37:30.297110665Z" level=info msg="StartContainer for \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\" returns successfully" Jan 14 13:37:31.221144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b-rootfs.mount: Deactivated successfully. Jan 14 13:37:31.962986 containerd[1768]: time="2025-01-14T13:37:31.962929244Z" level=info msg="shim disconnected" id=88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b namespace=k8s.io Jan 14 13:37:31.963602 containerd[1768]: time="2025-01-14T13:37:31.963274404Z" level=warning msg="cleaning up after shim disconnected" id=88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b namespace=k8s.io Jan 14 13:37:31.963602 containerd[1768]: time="2025-01-14T13:37:31.963291044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:37:32.433297 containerd[1768]: time="2025-01-14T13:37:32.432509980Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:37:32.468391 containerd[1768]: time="2025-01-14T13:37:32.468272301Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\"" Jan 14 13:37:32.469596 containerd[1768]: time="2025-01-14T13:37:32.468754501Z" level=info msg="StartContainer for \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\"" Jan 14 13:37:32.504514 systemd[1]: Started cri-containerd-e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3.scope - libcontainer container e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3. Jan 14 13:37:32.530418 containerd[1768]: time="2025-01-14T13:37:32.530366344Z" level=info msg="StartContainer for \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\" returns successfully" Jan 14 13:37:32.538204 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:37:32.538419 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:37:32.538482 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:37:32.542720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:37:32.545827 systemd[1]: cri-containerd-e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3.scope: Deactivated successfully. Jan 14 13:37:32.569596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 13:37:32.582938 containerd[1768]: time="2025-01-14T13:37:32.582876385Z" level=info msg="shim disconnected" id=e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3 namespace=k8s.io Jan 14 13:37:32.582938 containerd[1768]: time="2025-01-14T13:37:32.582935745Z" level=warning msg="cleaning up after shim disconnected" id=e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3 namespace=k8s.io Jan 14 13:37:32.582938 containerd[1768]: time="2025-01-14T13:37:32.582944265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:37:33.436615 containerd[1768]: time="2025-01-14T13:37:33.436489096Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:37:33.456080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3-rootfs.mount: Deactivated successfully. Jan 14 13:37:33.693070 containerd[1768]: time="2025-01-14T13:37:33.692952503Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\"" Jan 14 13:37:33.693880 containerd[1768]: time="2025-01-14T13:37:33.693583023Z" level=info msg="StartContainer for \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\"" Jan 14 13:37:33.721524 systemd[1]: Started cri-containerd-4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de.scope - libcontainer container 4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de. Jan 14 13:37:33.747984 systemd[1]: cri-containerd-4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de.scope: Deactivated successfully. Jan 14 13:37:33.753827 containerd[1768]: time="2025-01-14T13:37:33.753694333Z" level=info msg="StartContainer for \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\" returns successfully" Jan 14 13:37:34.457807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de-rootfs.mount: Deactivated successfully. 
Jan 14 13:37:34.495929 containerd[1768]: time="2025-01-14T13:37:34.495851260Z" level=info msg="shim disconnected" id=4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de namespace=k8s.io Jan 14 13:37:34.495929 containerd[1768]: time="2025-01-14T13:37:34.495927860Z" level=warning msg="cleaning up after shim disconnected" id=4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de namespace=k8s.io Jan 14 13:37:34.496290 containerd[1768]: time="2025-01-14T13:37:34.495936860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:37:35.446395 containerd[1768]: time="2025-01-14T13:37:35.444850688Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:37:35.693998 containerd[1768]: time="2025-01-14T13:37:35.693952331Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\"" Jan 14 13:37:35.695007 containerd[1768]: time="2025-01-14T13:37:35.694812932Z" level=info msg="StartContainer for \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\"" Jan 14 13:37:35.719539 systemd[1]: Started cri-containerd-8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac.scope - libcontainer container 8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac. Jan 14 13:37:35.738627 systemd[1]: cri-containerd-8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac.scope: Deactivated successfully. Jan 14 13:37:35.743334 containerd[1768]: time="2025-01-14T13:37:35.743236916Z" level=info msg="StartContainer for \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\" returns successfully" Jan 14 13:37:35.760268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac-rootfs.mount: Deactivated successfully. 
Jan 14 13:37:36.431518 containerd[1768]: time="2025-01-14T13:37:36.431440535Z" level=info msg="shim disconnected" id=8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac namespace=k8s.io Jan 14 13:37:36.431518 containerd[1768]: time="2025-01-14T13:37:36.431513176Z" level=warning msg="cleaning up after shim disconnected" id=8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac namespace=k8s.io Jan 14 13:37:36.431518 containerd[1768]: time="2025-01-14T13:37:36.431523016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:37:36.503570 containerd[1768]: time="2025-01-14T13:37:36.503418931Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:37:36.748918 containerd[1768]: time="2025-01-14T13:37:36.748793772Z" level=info msg="CreateContainer within sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\"" Jan 14 13:37:36.750795 containerd[1768]: time="2025-01-14T13:37:36.749753853Z" level=info msg="StartContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\"" Jan 14 13:37:36.778572 systemd[1]: Started cri-containerd-b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19.scope - libcontainer container b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19. Jan 14 13:37:36.805474 containerd[1768]: time="2025-01-14T13:37:36.805414520Z" level=info msg="StartContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" returns successfully" Jan 14 13:37:36.882317 kubelet[3459]: I0114 13:37:36.882281 3459 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:37:36.916024 kubelet[3459]: I0114 13:37:36.915966 3459 topology_manager.go:215] "Topology Admit Handler" podUID="c355f697-b7d1-41ba-b2ae-922cdab4aa41" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hz4z" Jan 14 13:37:36.923790 kubelet[3459]: I0114 13:37:36.923756 3459 topology_manager.go:215] "Topology Admit Handler" podUID="c25a2ea6-f570-436c-9b20-a026eabed544" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2lz2f" Jan 14 13:37:36.928132 systemd[1]: Created slice kubepods-burstable-podc355f697_b7d1_41ba_b2ae_922cdab4aa41.slice - libcontainer container kubepods-burstable-podc355f697_b7d1_41ba_b2ae_922cdab4aa41.slice. Jan 14 13:37:36.935571 systemd[1]: Created slice kubepods-burstable-podc25a2ea6_f570_436c_9b20_a026eabed544.slice - libcontainer container kubepods-burstable-podc25a2ea6_f570_436c_9b20_a026eabed544.slice. 
Jan 14 13:37:37.003007 kubelet[3459]: I0114 13:37:37.002728 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25a2ea6-f570-436c-9b20-a026eabed544-config-volume\") pod \"coredns-7db6d8ff4d-2lz2f\" (UID: \"c25a2ea6-f570-436c-9b20-a026eabed544\") " pod="kube-system/coredns-7db6d8ff4d-2lz2f" Jan 14 13:37:37.003007 kubelet[3459]: I0114 13:37:37.002766 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c355f697-b7d1-41ba-b2ae-922cdab4aa41-config-volume\") pod \"coredns-7db6d8ff4d-9hz4z\" (UID: \"c355f697-b7d1-41ba-b2ae-922cdab4aa41\") " pod="kube-system/coredns-7db6d8ff4d-9hz4z" Jan 14 13:37:37.003007 kubelet[3459]: I0114 13:37:37.002786 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2fw\" (UniqueName: \"kubernetes.io/projected/c25a2ea6-f570-436c-9b20-a026eabed544-kube-api-access-bs2fw\") pod \"coredns-7db6d8ff4d-2lz2f\" (UID: \"c25a2ea6-f570-436c-9b20-a026eabed544\") " pod="kube-system/coredns-7db6d8ff4d-2lz2f" Jan 14 13:37:37.003007 kubelet[3459]: I0114 13:37:37.002804 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwr4x\" (UniqueName: \"kubernetes.io/projected/c355f697-b7d1-41ba-b2ae-922cdab4aa41-kube-api-access-fwr4x\") pod \"coredns-7db6d8ff4d-9hz4z\" (UID: \"c355f697-b7d1-41ba-b2ae-922cdab4aa41\") " pod="kube-system/coredns-7db6d8ff4d-9hz4z" Jan 14 13:37:37.234530 containerd[1768]: time="2025-01-14T13:37:37.234488452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hz4z,Uid:c355f697-b7d1-41ba-b2ae-922cdab4aa41,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:37.239523 containerd[1768]: time="2025-01-14T13:37:37.239477695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lz2f,Uid:c25a2ea6-f570-436c-9b20-a026eabed544,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:37.518754 kubelet[3459]: I0114 13:37:37.518678 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22szb" podStartSLOduration=8.287021343 podStartE2EDuration="17.518660472s" podCreationTimestamp="2025-01-14 13:37:20 +0000 UTC" firstStartedPulling="2025-01-14 13:37:20.967878493 +0000 UTC m=+16.755495355" lastFinishedPulling="2025-01-14 13:37:30.199517622 +0000 UTC m=+25.987134484" observedRunningTime="2025-01-14 13:37:37.518509192 +0000 UTC m=+33.306126014" watchObservedRunningTime="2025-01-14 13:37:37.518660472 +0000 UTC m=+33.306277334" Jan 14 13:37:38.944551 systemd-networkd[1340]: cilium_host: Link UP Jan 14 13:37:38.944663 systemd-networkd[1340]: cilium_net: Link UP Jan 14 13:37:38.944941 systemd-networkd[1340]: cilium_net: Gained carrier Jan 14 13:37:38.945081 systemd-networkd[1340]: cilium_host: Gained carrier Jan 14 13:37:38.945163 systemd-networkd[1340]: cilium_net: Gained IPv6LL Jan 14 13:37:38.945277 systemd-networkd[1340]: cilium_host: Gained IPv6LL Jan 14 13:37:39.069834 systemd-networkd[1340]: cilium_vxlan: Link UP Jan 14 13:37:39.069842 systemd-networkd[1340]: cilium_vxlan: Gained carrier Jan 14 13:37:39.343451 kernel: NET: Registered PF_ALG protocol family Jan 14 13:37:40.048703 systemd-networkd[1340]: lxc_health: Link UP Jan 14 13:37:40.057994 systemd-networkd[1340]: lxc_health: Gained carrier Jan 14 13:37:40.354496 systemd-networkd[1340]: lxcf86d060e1111: 
Link UP Jan 14 13:37:40.364394 kernel: eth0: renamed from tmp70df5 Jan 14 13:37:40.369488 systemd-networkd[1340]: lxcf86d060e1111: Gained carrier Jan 14 13:37:40.605196 systemd-networkd[1340]: lxcb747432e7516: Link UP Jan 14 13:37:40.619377 kernel: eth0: renamed from tmped56f Jan 14 13:37:40.629277 systemd-networkd[1340]: lxcb747432e7516: Gained carrier Jan 14 13:37:40.784530 systemd-networkd[1340]: cilium_vxlan: Gained IPv6LL Jan 14 13:37:41.871563 systemd-networkd[1340]: lxcf86d060e1111: Gained IPv6LL Jan 14 13:37:42.063477 systemd-networkd[1340]: lxc_health: Gained IPv6LL Jan 14 13:37:42.127510 systemd-networkd[1340]: lxcb747432e7516: Gained IPv6LL Jan 14 13:37:43.992721 containerd[1768]: time="2025-01-14T13:37:43.992548110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:37:43.992721 containerd[1768]: time="2025-01-14T13:37:43.992626830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:37:43.993412 containerd[1768]: time="2025-01-14T13:37:43.992644150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:43.993412 containerd[1768]: time="2025-01-14T13:37:43.992764870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:44.027944 containerd[1768]: time="2025-01-14T13:37:44.027046732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:37:44.027944 containerd[1768]: time="2025-01-14T13:37:44.027106932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:37:44.027944 containerd[1768]: time="2025-01-14T13:37:44.027122132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:44.027944 containerd[1768]: time="2025-01-14T13:37:44.027199692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:37:44.031574 systemd[1]: Started cri-containerd-70df53419ae946346667fb0d688e0d327d14235eb0a6ca109bd20db8cba775d8.scope - libcontainer container 70df53419ae946346667fb0d688e0d327d14235eb0a6ca109bd20db8cba775d8. Jan 14 13:37:44.065014 systemd[1]: Started cri-containerd-ed56f232d07835db903e7424bfd6c02be8f3e75412c5ad1f7603bcc9b656bbc1.scope - libcontainer container ed56f232d07835db903e7424bfd6c02be8f3e75412c5ad1f7603bcc9b656bbc1. 
Jan 14 13:37:44.093762 containerd[1768]: time="2025-01-14T13:37:44.093719811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lz2f,Uid:c25a2ea6-f570-436c-9b20-a026eabed544,Namespace:kube-system,Attempt:0,} returns sandbox id \"70df53419ae946346667fb0d688e0d327d14235eb0a6ca109bd20db8cba775d8\"" Jan 14 13:37:44.100361 containerd[1768]: time="2025-01-14T13:37:44.100317983Z" level=info msg="CreateContainer within sandbox \"70df53419ae946346667fb0d688e0d327d14235eb0a6ca109bd20db8cba775d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:37:44.125620 containerd[1768]: time="2025-01-14T13:37:44.125572348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hz4z,Uid:c355f697-b7d1-41ba-b2ae-922cdab4aa41,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed56f232d07835db903e7424bfd6c02be8f3e75412c5ad1f7603bcc9b656bbc1\"" Jan 14 13:37:44.130626 containerd[1768]: time="2025-01-14T13:37:44.130525837Z" level=info msg="CreateContainer within sandbox \"ed56f232d07835db903e7424bfd6c02be8f3e75412c5ad1f7603bcc9b656bbc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:37:44.595752 containerd[1768]: time="2025-01-14T13:37:44.595702590Z" level=info msg="CreateContainer within sandbox \"ed56f232d07835db903e7424bfd6c02be8f3e75412c5ad1f7603bcc9b656bbc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"034d01a4162b26a413df7af7d37b081c605c3f567b698e1c41403110ba1c5f84\"" Jan 14 13:37:44.596620 containerd[1768]: time="2025-01-14T13:37:44.596415031Z" level=info msg="StartContainer for \"034d01a4162b26a413df7af7d37b081c605c3f567b698e1c41403110ba1c5f84\"" Jan 14 13:37:44.643791 containerd[1768]: time="2025-01-14T13:37:44.643740836Z" level=info msg="CreateContainer within sandbox \"70df53419ae946346667fb0d688e0d327d14235eb0a6ca109bd20db8cba775d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c48776fa06d42dcc62c3f21792150938ef2ec626b3f305916b79c98c3b9f817f\"" Jan 14 13:37:44.644930 containerd[1768]: time="2025-01-14T13:37:44.644866598Z" level=info msg="StartContainer for \"c48776fa06d42dcc62c3f21792150938ef2ec626b3f305916b79c98c3b9f817f\"" Jan 14 13:37:44.667516 systemd[1]: Started cri-containerd-c48776fa06d42dcc62c3f21792150938ef2ec626b3f305916b79c98c3b9f817f.scope - libcontainer container c48776fa06d42dcc62c3f21792150938ef2ec626b3f305916b79c98c3b9f817f. Jan 14 13:37:44.674502 systemd[1]: Started cri-containerd-034d01a4162b26a413df7af7d37b081c605c3f567b698e1c41403110ba1c5f84.scope - libcontainer container 034d01a4162b26a413df7af7d37b081c605c3f567b698e1c41403110ba1c5f84. 
Jan 14 13:37:44.700823 containerd[1768]: time="2025-01-14T13:37:44.700627858Z" level=info msg="StartContainer for \"c48776fa06d42dcc62c3f21792150938ef2ec626b3f305916b79c98c3b9f817f\" returns successfully" Jan 14 13:37:44.706721 containerd[1768]: time="2025-01-14T13:37:44.706690389Z" level=info msg="StartContainer for \"034d01a4162b26a413df7af7d37b081c605c3f567b698e1c41403110ba1c5f84\" returns successfully" Jan 14 13:37:45.534744 kubelet[3459]: I0114 13:37:45.534674 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2lz2f" podStartSLOduration=25.53465936 podStartE2EDuration="25.53465936s" podCreationTimestamp="2025-01-14 13:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:45.53292244 +0000 UTC m=+41.320539302" watchObservedRunningTime="2025-01-14 13:37:45.53465936 +0000 UTC m=+41.322276222" Jan 14 13:37:45.552753 kubelet[3459]: I0114 13:37:45.552279 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hz4z" podStartSLOduration=25.552261603 podStartE2EDuration="25.552261603s" podCreationTimestamp="2025-01-14 13:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:37:45.551456762 +0000 UTC m=+41.339073624" watchObservedRunningTime="2025-01-14 13:37:45.552261603 +0000 UTC m=+41.339878465" Jan 14 13:39:29.583251 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:49214.service - OpenSSH per-connection server daemon (10.200.16.10:49214). Jan 14 13:39:30.033191 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 49214 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:30.034554 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:30.038230 systemd-logind[1740]: New session 10 of user core. Jan 14 13:39:30.046559 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:39:30.443379 sshd[4844]: Connection closed by 10.200.16.10 port 49214 Jan 14 13:39:30.443758 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:30.446136 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:49214.service: Deactivated successfully. Jan 14 13:39:30.447784 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:39:30.449161 systemd-logind[1740]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:39:30.451247 systemd-logind[1740]: Removed session 10. Jan 14 13:39:35.524225 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:49226.service - OpenSSH per-connection server daemon (10.200.16.10:49226). Jan 14 13:39:35.973722 sshd[4855]: Accepted publickey for core from 10.200.16.10 port 49226 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:35.975011 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:35.979298 systemd-logind[1740]: New session 11 of user core. Jan 14 13:39:35.984489 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 13:39:36.369122 sshd[4857]: Connection closed by 10.200.16.10 port 49226 Jan 14 13:39:36.370002 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:36.374752 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:49226.service: Deactivated successfully. 
Jan 14 13:39:36.374887 systemd-logind[1740]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:39:36.376931 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 13:39:36.378876 systemd-logind[1740]: Removed session 11. Jan 14 13:39:41.459919 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:41840.service - OpenSSH per-connection server daemon (10.200.16.10:41840). Jan 14 13:39:41.945995 sshd[4869]: Accepted publickey for core from 10.200.16.10 port 41840 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:41.947255 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:41.951650 systemd-logind[1740]: New session 12 of user core. Jan 14 13:39:41.955502 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 13:39:42.358865 sshd[4871]: Connection closed by 10.200.16.10 port 41840 Jan 14 13:39:42.359603 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:42.363029 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:41840.service: Deactivated successfully. Jan 14 13:39:42.367474 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:39:42.368270 systemd-logind[1740]: Session 12 logged out. Waiting for processes to exit. Jan 14 13:39:42.369139 systemd-logind[1740]: Removed session 12. Jan 14 13:39:47.445543 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:56306.service - OpenSSH per-connection server daemon (10.200.16.10:56306). Jan 14 13:39:47.897509 sshd[4883]: Accepted publickey for core from 10.200.16.10 port 56306 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:47.898703 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:47.902480 systemd-logind[1740]: New session 13 of user core. Jan 14 13:39:47.913478 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 13:39:48.294038 sshd[4885]: Connection closed by 10.200.16.10 port 56306 Jan 14 13:39:48.294534 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:48.297597 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:56306.service: Deactivated successfully. Jan 14 13:39:48.299818 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:39:48.300539 systemd-logind[1740]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:39:48.301492 systemd-logind[1740]: Removed session 13. Jan 14 13:39:53.375938 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:56314.service - OpenSSH per-connection server daemon (10.200.16.10:56314). Jan 14 13:39:53.827448 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 56314 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:53.828761 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:53.833026 systemd-logind[1740]: New session 14 of user core. Jan 14 13:39:53.836556 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 13:39:54.225811 sshd[4901]: Connection closed by 10.200.16.10 port 56314 Jan 14 13:39:54.225389 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:54.228731 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:56314.service: Deactivated successfully. Jan 14 13:39:54.232053 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:39:54.232868 systemd-logind[1740]: Session 14 logged out. Waiting for processes to exit. 
Jan 14 13:39:54.233832 systemd-logind[1740]: Removed session 14. Jan 14 13:39:54.311578 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:56328.service - OpenSSH per-connection server daemon (10.200.16.10:56328). Jan 14 13:39:54.763222 sshd[4913]: Accepted publickey for core from 10.200.16.10 port 56328 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:54.764527 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:54.768206 systemd-logind[1740]: New session 15 of user core. Jan 14 13:39:54.776576 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 13:39:55.199391 sshd[4915]: Connection closed by 10.200.16.10 port 56328 Jan 14 13:39:55.199929 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:55.203267 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:56328.service: Deactivated successfully. Jan 14 13:39:55.205191 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:39:55.206155 systemd-logind[1740]: Session 15 logged out. Waiting for processes to exit. Jan 14 13:39:55.207312 systemd-logind[1740]: Removed session 15. Jan 14 13:39:55.295586 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:56336.service - OpenSSH per-connection server daemon (10.200.16.10:56336). Jan 14 13:39:55.773411 sshd[4924]: Accepted publickey for core from 10.200.16.10 port 56336 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:39:55.774671 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:55.778371 systemd-logind[1740]: New session 16 of user core. Jan 14 13:39:55.783554 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 13:39:56.183357 sshd[4926]: Connection closed by 10.200.16.10 port 56336 Jan 14 13:39:56.183904 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:56.187116 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:56336.service: Deactivated successfully. Jan 14 13:39:56.188688 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:39:56.189308 systemd-logind[1740]: Session 16 logged out. Waiting for processes to exit. Jan 14 13:39:56.191484 systemd-logind[1740]: Removed session 16. Jan 14 13:40:01.265459 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:58392.service - OpenSSH per-connection server daemon (10.200.16.10:58392). Jan 14 13:40:01.715939 sshd[4937]: Accepted publickey for core from 10.200.16.10 port 58392 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:01.717206 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:01.721027 systemd-logind[1740]: New session 17 of user core. Jan 14 13:40:01.725493 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 13:40:02.113662 sshd[4939]: Connection closed by 10.200.16.10 port 58392 Jan 14 13:40:02.114455 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:02.117743 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:58392.service: Deactivated successfully. Jan 14 13:40:02.119280 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:40:02.119993 systemd-logind[1740]: Session 17 logged out. Waiting for processes to exit. Jan 14 13:40:02.121100 systemd-logind[1740]: Removed session 17. 
Jan 14 13:40:07.199990 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:45456.service - OpenSSH per-connection server daemon (10.200.16.10:45456). Jan 14 13:40:07.682116 sshd[4952]: Accepted publickey for core from 10.200.16.10 port 45456 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:07.683389 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:07.687739 systemd-logind[1740]: New session 18 of user core. Jan 14 13:40:07.692491 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 13:40:08.105635 sshd[4954]: Connection closed by 10.200.16.10 port 45456 Jan 14 13:40:08.106994 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:08.110293 systemd-logind[1740]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:40:08.111030 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:45456.service: Deactivated successfully. Jan 14 13:40:08.113885 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:40:08.114965 systemd-logind[1740]: Removed session 18. Jan 14 13:40:08.190781 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:45462.service - OpenSSH per-connection server daemon (10.200.16.10:45462). Jan 14 13:40:08.671040 sshd[4965]: Accepted publickey for core from 10.200.16.10 port 45462 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:08.673011 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:08.677404 systemd-logind[1740]: New session 19 of user core. Jan 14 13:40:08.681478 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 13:40:09.114544 sshd[4967]: Connection closed by 10.200.16.10 port 45462 Jan 14 13:40:09.115111 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:09.118289 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:45462.service: Deactivated successfully. Jan 14 13:40:09.120954 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:40:09.122070 systemd-logind[1740]: Session 19 logged out. Waiting for processes to exit. Jan 14 13:40:09.123514 systemd-logind[1740]: Removed session 19. Jan 14 13:40:09.195418 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:45472.service - OpenSSH per-connection server daemon (10.200.16.10:45472). Jan 14 13:40:09.645175 sshd[4975]: Accepted publickey for core from 10.200.16.10 port 45472 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:09.646466 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:09.650155 systemd-logind[1740]: New session 20 of user core. Jan 14 13:40:09.658491 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 13:40:11.375638 sshd[4977]: Connection closed by 10.200.16.10 port 45472 Jan 14 13:40:11.376431 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:11.379556 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:45472.service: Deactivated successfully. Jan 14 13:40:11.381829 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:40:11.383628 systemd-logind[1740]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:40:11.384896 systemd-logind[1740]: Removed session 20. Jan 14 13:40:11.460490 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:45486.service - OpenSSH per-connection server daemon (10.200.16.10:45486). 
Jan 14 13:40:11.910545 sshd[4993]: Accepted publickey for core from 10.200.16.10 port 45486 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:11.911822 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:11.915531 systemd-logind[1740]: New session 21 of user core. Jan 14 13:40:11.926506 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 13:40:12.417391 sshd[4995]: Connection closed by 10.200.16.10 port 45486 Jan 14 13:40:12.417938 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:12.421041 systemd-logind[1740]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:40:12.422614 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:45486.service: Deactivated successfully. Jan 14 13:40:12.424838 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:40:12.426165 systemd-logind[1740]: Removed session 21. Jan 14 13:40:12.504629 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:45490.service - OpenSSH per-connection server daemon (10.200.16.10:45490). Jan 14 13:40:12.951698 sshd[5004]: Accepted publickey for core from 10.200.16.10 port 45490 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:12.953036 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:12.957103 systemd-logind[1740]: New session 22 of user core. Jan 14 13:40:12.963561 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 13:40:13.343117 sshd[5006]: Connection closed by 10.200.16.10 port 45490 Jan 14 13:40:13.343747 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:13.346811 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:45490.service: Deactivated successfully. Jan 14 13:40:13.348341 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:40:13.349081 systemd-logind[1740]: Session 22 logged out. Waiting for processes to exit. Jan 14 13:40:13.349890 systemd-logind[1740]: Removed session 22. Jan 14 13:40:18.426737 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:49398.service - OpenSSH per-connection server daemon (10.200.16.10:49398). Jan 14 13:40:18.881561 sshd[5021]: Accepted publickey for core from 10.200.16.10 port 49398 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:18.882871 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:18.887358 systemd-logind[1740]: New session 23 of user core. Jan 14 13:40:18.892500 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 13:40:19.280228 sshd[5023]: Connection closed by 10.200.16.10 port 49398 Jan 14 13:40:19.284387 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:49398.service: Deactivated successfully. Jan 14 13:40:19.280772 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:19.286591 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:40:19.287779 systemd-logind[1740]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:40:19.288670 systemd-logind[1740]: Removed session 23. Jan 14 13:40:24.363466 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:49410.service - OpenSSH per-connection server daemon (10.200.16.10:49410). 
Jan 14 13:40:24.820177 sshd[5036]: Accepted publickey for core from 10.200.16.10 port 49410 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:24.821511 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:24.826417 systemd-logind[1740]: New session 24 of user core. Jan 14 13:40:24.830473 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 13:40:25.216408 sshd[5041]: Connection closed by 10.200.16.10 port 49410 Jan 14 13:40:25.216999 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:25.220703 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:49410.service: Deactivated successfully. Jan 14 13:40:25.223731 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:40:25.224802 systemd-logind[1740]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:40:25.226139 systemd-logind[1740]: Removed session 24. Jan 14 13:40:28.882077 update_engine[1744]: I20250114 13:40:28.882016 1744 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 13:40:28.882077 update_engine[1744]: I20250114 13:40:28.882073 1744 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 13:40:28.882544 update_engine[1744]: I20250114 13:40:28.882228 1744 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 13:40:28.882689 update_engine[1744]: I20250114 13:40:28.882658 1744 omaha_request_params.cc:62] Current group set to beta Jan 14 13:40:28.882778 update_engine[1744]: I20250114 13:40:28.882757 1744 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 13:40:28.882778 update_engine[1744]: I20250114 13:40:28.882771 1744 update_attempter.cc:643] Scheduling an action processor start. Jan 14 13:40:28.882822 update_engine[1744]: I20250114 13:40:28.882788 1744 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 13:40:28.882822 update_engine[1744]: I20250114 13:40:28.882817 1744 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 13:40:28.882991 update_engine[1744]: I20250114 13:40:28.882865 1744 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 13:40:28.882991 update_engine[1744]: I20250114 13:40:28.882877 1744 omaha_request_action.cc:272] Request: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: Jan 14 13:40:28.882991 update_engine[1744]: I20250114 13:40:28.882884 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:40:28.883399 locksmithd[1800]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 13:40:28.883901 update_engine[1744]: I20250114 13:40:28.883873 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:40:28.884225 update_engine[1744]: I20250114 13:40:28.884186 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 13:40:28.901967 update_engine[1744]: E20250114 13:40:28.901924 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:40:28.902067 update_engine[1744]: I20250114 13:40:28.902009 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 13:40:30.298528 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:45614.service - OpenSSH per-connection server daemon (10.200.16.10:45614). Jan 14 13:40:30.748459 sshd[5052]: Accepted publickey for core from 10.200.16.10 port 45614 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:30.749752 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:30.753744 systemd-logind[1740]: New session 25 of user core. Jan 14 13:40:30.762484 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 13:40:31.144471 sshd[5054]: Connection closed by 10.200.16.10 port 45614 Jan 14 13:40:31.145101 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:31.147495 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:45614.service: Deactivated successfully. Jan 14 13:40:31.149127 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:40:31.150730 systemd-logind[1740]: Session 25 logged out. Waiting for processes to exit. Jan 14 13:40:31.151634 systemd-logind[1740]: Removed session 25. Jan 14 13:40:31.224173 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:45628.service - OpenSSH per-connection server daemon (10.200.16.10:45628). Jan 14 13:40:31.673585 sshd[5065]: Accepted publickey for core from 10.200.16.10 port 45628 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:31.674867 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:31.679454 systemd-logind[1740]: New session 26 of user core. Jan 14 13:40:31.683504 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 13:40:33.224953 containerd[1768]: time="2025-01-14T13:40:33.224697768Z" level=info msg="StopContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" with timeout 30 (s)" Jan 14 13:40:33.226232 containerd[1768]: time="2025-01-14T13:40:33.225674049Z" level=info msg="Stop container \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" with signal terminated" Jan 14 13:40:33.231039 containerd[1768]: time="2025-01-14T13:40:33.230751049Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:40:33.238957 containerd[1768]: time="2025-01-14T13:40:33.238923931Z" level=info msg="StopContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" with timeout 2 (s)" Jan 14 13:40:33.239216 containerd[1768]: time="2025-01-14T13:40:33.239196971Z" level=info msg="Stop container \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" with signal terminated" Jan 14 13:40:33.241750 systemd[1]: cri-containerd-b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e.scope: Deactivated successfully. 
Jan 14 13:40:33.250208 systemd-networkd[1340]: lxc_health: Link DOWN Jan 14 13:40:33.250218 systemd-networkd[1340]: lxc_health: Lost carrier Jan 14 13:40:33.267703 systemd[1]: cri-containerd-b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19.scope: Deactivated successfully. Jan 14 13:40:33.267953 systemd[1]: cri-containerd-b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19.scope: Consumed 6.265s CPU time. Jan 14 13:40:33.274026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e-rootfs.mount: Deactivated successfully. Jan 14 13:40:33.287387 containerd[1768]: time="2025-01-14T13:40:33.287237738Z" level=info msg="shim disconnected" id=b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e namespace=k8s.io Jan 14 13:40:33.287387 containerd[1768]: time="2025-01-14T13:40:33.287306578Z" level=warning msg="cleaning up after shim disconnected" id=b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e namespace=k8s.io Jan 14 13:40:33.287387 containerd[1768]: time="2025-01-14T13:40:33.287316418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:33.293128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19-rootfs.mount: Deactivated successfully. Jan 14 13:40:33.303911 containerd[1768]: time="2025-01-14T13:40:33.303629861Z" level=info msg="shim disconnected" id=b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19 namespace=k8s.io Jan 14 13:40:33.303911 containerd[1768]: time="2025-01-14T13:40:33.303685181Z" level=warning msg="cleaning up after shim disconnected" id=b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19 namespace=k8s.io Jan 14 13:40:33.303911 containerd[1768]: time="2025-01-14T13:40:33.303693781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:33.309585 containerd[1768]: time="2025-01-14T13:40:33.309432461Z" level=info msg="StopContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" returns successfully" Jan 14 13:40:33.310341 containerd[1768]: time="2025-01-14T13:40:33.310168822Z" level=info msg="StopPodSandbox for \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\"" Jan 14 13:40:33.310341 containerd[1768]: time="2025-01-14T13:40:33.310201822Z" level=info msg="Container to stop \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.312024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d-shm.mount: Deactivated successfully. Jan 14 13:40:33.322013 systemd[1]: cri-containerd-a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d.scope: Deactivated successfully. 
Jan 14 13:40:33.324127 containerd[1768]: time="2025-01-14T13:40:33.323803944Z" level=info msg="StopContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" returns successfully" Jan 14 13:40:33.325643 containerd[1768]: time="2025-01-14T13:40:33.325607664Z" level=info msg="StopPodSandbox for \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\"" Jan 14 13:40:33.325794 containerd[1768]: time="2025-01-14T13:40:33.325771384Z" level=info msg="Container to stop \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.325861 containerd[1768]: time="2025-01-14T13:40:33.325791384Z" level=info msg="Container to stop \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.325894 containerd[1768]: time="2025-01-14T13:40:33.325870224Z" level=info msg="Container to stop \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.325894 containerd[1768]: time="2025-01-14T13:40:33.325886024Z" level=info msg="Container to stop \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.325941 containerd[1768]: time="2025-01-14T13:40:33.325894584Z" level=info msg="Container to stop \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:40:33.328162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4-shm.mount: Deactivated successfully. Jan 14 13:40:33.334571 systemd[1]: cri-containerd-76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4.scope: Deactivated successfully. 
Jan 14 13:40:33.358410 containerd[1768]: time="2025-01-14T13:40:33.357062189Z" level=info msg="shim disconnected" id=a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d namespace=k8s.io Jan 14 13:40:33.358410 containerd[1768]: time="2025-01-14T13:40:33.357128829Z" level=warning msg="cleaning up after shim disconnected" id=a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d namespace=k8s.io Jan 14 13:40:33.358410 containerd[1768]: time="2025-01-14T13:40:33.357136949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:33.365024 containerd[1768]: time="2025-01-14T13:40:33.364846870Z" level=info msg="shim disconnected" id=76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4 namespace=k8s.io Jan 14 13:40:33.365024 containerd[1768]: time="2025-01-14T13:40:33.364901870Z" level=warning msg="cleaning up after shim disconnected" id=76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4 namespace=k8s.io Jan 14 13:40:33.365024 containerd[1768]: time="2025-01-14T13:40:33.364909470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:33.378217 containerd[1768]: time="2025-01-14T13:40:33.377792592Z" level=info msg="TearDown network for sandbox \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\" successfully" Jan 14 13:40:33.378217 containerd[1768]: time="2025-01-14T13:40:33.377842832Z" level=info msg="StopPodSandbox for \"a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d\" returns successfully" Jan 14 13:40:33.385635 containerd[1768]: time="2025-01-14T13:40:33.385265793Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:40:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:40:33.386723 containerd[1768]: time="2025-01-14T13:40:33.386630113Z" level=info msg="TearDown network for sandbox \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" successfully" Jan 14 13:40:33.386723 containerd[1768]: time="2025-01-14T13:40:33.386658793Z" level=info msg="StopPodSandbox for \"76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4\" returns successfully" Jan 14 13:40:33.393917 kubelet[3459]: I0114 13:40:33.393761 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17222547-41e8-4fc3-862e-b19f632d5385-cilium-config-path\") pod \"17222547-41e8-4fc3-862e-b19f632d5385\" (UID: \"17222547-41e8-4fc3-862e-b19f632d5385\") " Jan 14 13:40:33.393917 kubelet[3459]: I0114 13:40:33.393811 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhwvb\" (UniqueName: \"kubernetes.io/projected/17222547-41e8-4fc3-862e-b19f632d5385-kube-api-access-bhwvb\") pod \"17222547-41e8-4fc3-862e-b19f632d5385\" (UID: \"17222547-41e8-4fc3-862e-b19f632d5385\") " Jan 14 13:40:33.396168 kubelet[3459]: I0114 13:40:33.396130 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17222547-41e8-4fc3-862e-b19f632d5385-kube-api-access-bhwvb" (OuterVolumeSpecName: "kube-api-access-bhwvb") pod "17222547-41e8-4fc3-862e-b19f632d5385" (UID: "17222547-41e8-4fc3-862e-b19f632d5385"). InnerVolumeSpecName "kube-api-access-bhwvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:40:33.396269 kubelet[3459]: I0114 13:40:33.396246 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17222547-41e8-4fc3-862e-b19f632d5385-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17222547-41e8-4fc3-862e-b19f632d5385" (UID: "17222547-41e8-4fc3-862e-b19f632d5385"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:40:33.494210 kubelet[3459]: I0114 13:40:33.494094 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-xtables-lock\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494210 kubelet[3459]: I0114 13:40:33.494145 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57vfr\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-kube-api-access-57vfr\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494210 kubelet[3459]: I0114 13:40:33.494171 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25fbf88b-86d0-48e5-b9a5-948785e2c45b-clustermesh-secrets\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494210 kubelet[3459]: I0114 13:40:33.494187 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-kernel\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494210 kubelet[3459]: I0114 13:40:33.494202 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-run\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494221 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-config-path\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494237 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cni-path\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494251 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hubble-tls\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494276 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-bpf-maps\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494290 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-lib-modules\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494427 kubelet[3459]: I0114 13:40:33.494305 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hostproc\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494320 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-etc-cni-netd\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494334 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-net\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494365 3459 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-cgroup\") pod \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\" (UID: \"25fbf88b-86d0-48e5-b9a5-948785e2c45b\") " Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494404 3459 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17222547-41e8-4fc3-862e-b19f632d5385-cilium-config-path\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494415 3459 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bhwvb\" (UniqueName: \"kubernetes.io/projected/17222547-41e8-4fc3-862e-b19f632d5385-kube-api-access-bhwvb\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.494551 kubelet[3459]: I0114 13:40:33.494464 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.494670 kubelet[3459]: I0114 13:40:33.494498 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.495569 kubelet[3459]: I0114 13:40:33.495396 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499583 kubelet[3459]: I0114 13:40:33.495547 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499583 kubelet[3459]: I0114 13:40:33.495561 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hostproc" (OuterVolumeSpecName: "hostproc") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499583 kubelet[3459]: I0114 13:40:33.495573 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499583 kubelet[3459]: I0114 13:40:33.495585 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499583 kubelet[3459]: I0114 13:40:33.495801 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499976 kubelet[3459]: I0114 13:40:33.495824 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499976 kubelet[3459]: I0114 13:40:33.497536 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cni-path" (OuterVolumeSpecName: "cni-path") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:40:33.499976 kubelet[3459]: I0114 13:40:33.499736 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:40:33.500387 kubelet[3459]: I0114 13:40:33.500275 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:40:33.500486 kubelet[3459]: I0114 13:40:33.500365 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-kube-api-access-57vfr" (OuterVolumeSpecName: "kube-api-access-57vfr") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "kube-api-access-57vfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:40:33.500486 kubelet[3459]: I0114 13:40:33.500393 3459 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fbf88b-86d0-48e5-b9a5-948785e2c45b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "25fbf88b-86d0-48e5-b9a5-948785e2c45b" (UID: "25fbf88b-86d0-48e5-b9a5-948785e2c45b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.594945 3459 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-lib-modules\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.594975 3459 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hostproc\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.594983 3459 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-etc-cni-netd\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.594992 3459 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-net\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.595001 3459 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-cgroup\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.595010 3459 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-xtables-lock\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.595018 3459 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-57vfr\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-kube-api-access-57vfr\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595043 kubelet[3459]: I0114 13:40:33.595029 3459 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25fbf88b-86d0-48e5-b9a5-948785e2c45b-clustermesh-secrets\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595037 3459 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595045 3459 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-run\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595053 3459 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cni-path\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595061 3459 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25fbf88b-86d0-48e5-b9a5-948785e2c45b-hubble-tls\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595067 3459 
reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25fbf88b-86d0-48e5-b9a5-948785e2c45b-bpf-maps\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.595374 kubelet[3459]: I0114 13:40:33.595076 3459 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25fbf88b-86d0-48e5-b9a5-948785e2c45b-cilium-config-path\") on node \"ci-4186.1.0-a-8a230934f7\" DevicePath \"\"" Jan 14 13:40:33.822158 kubelet[3459]: I0114 13:40:33.821969 3459 scope.go:117] "RemoveContainer" containerID="b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19" Jan 14 13:40:33.827001 containerd[1768]: time="2025-01-14T13:40:33.826724021Z" level=info msg="RemoveContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\"" Jan 14 13:40:33.829432 systemd[1]: Removed slice kubepods-burstable-pod25fbf88b_86d0_48e5_b9a5_948785e2c45b.slice - libcontainer container kubepods-burstable-pod25fbf88b_86d0_48e5_b9a5_948785e2c45b.slice. Jan 14 13:40:33.829539 systemd[1]: kubepods-burstable-pod25fbf88b_86d0_48e5_b9a5_948785e2c45b.slice: Consumed 6.330s CPU time. Jan 14 13:40:33.834732 systemd[1]: Removed slice kubepods-besteffort-pod17222547_41e8_4fc3_862e_b19f632d5385.slice - libcontainer container kubepods-besteffort-pod17222547_41e8_4fc3_862e_b19f632d5385.slice. Jan 14 13:40:33.838060 containerd[1768]: time="2025-01-14T13:40:33.837979902Z" level=info msg="RemoveContainer for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" returns successfully" Jan 14 13:40:33.838790 kubelet[3459]: I0114 13:40:33.838534 3459 scope.go:117] "RemoveContainer" containerID="8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac" Jan 14 13:40:33.839777 containerd[1768]: time="2025-01-14T13:40:33.839711063Z" level=info msg="RemoveContainer for \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\"" Jan 14 13:40:33.849378 containerd[1768]: time="2025-01-14T13:40:33.849310344Z" level=info msg="RemoveContainer for \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\" returns successfully" Jan 14 13:40:33.849622 kubelet[3459]: I0114 13:40:33.849497 3459 scope.go:117] "RemoveContainer" containerID="4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de" Jan 14 13:40:33.852706 containerd[1768]: time="2025-01-14T13:40:33.852440345Z" level=info msg="RemoveContainer for \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\"" Jan 14 13:40:33.861398 containerd[1768]: time="2025-01-14T13:40:33.861343946Z" level=info msg="RemoveContainer for \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\" returns successfully" Jan 14 13:40:33.862213 kubelet[3459]: I0114 13:40:33.862132 3459 scope.go:117] "RemoveContainer" containerID="e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3" Jan 14 13:40:33.863171 containerd[1768]: time="2025-01-14T13:40:33.863124066Z" level=info msg="RemoveContainer for \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\"" Jan 14 13:40:33.870913 containerd[1768]: time="2025-01-14T13:40:33.870880387Z" level=info msg="RemoveContainer for \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\" returns successfully" Jan 14 13:40:33.871095 kubelet[3459]: I0114 13:40:33.871073 3459 scope.go:117] "RemoveContainer" containerID="88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b" Jan 14 13:40:33.872199 containerd[1768]: 
time="2025-01-14T13:40:33.872173508Z" level=info msg="RemoveContainer for \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\"" Jan 14 13:40:33.879015 containerd[1768]: time="2025-01-14T13:40:33.878985989Z" level=info msg="RemoveContainer for \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\" returns successfully" Jan 14 13:40:33.879279 kubelet[3459]: I0114 13:40:33.879230 3459 scope.go:117] "RemoveContainer" containerID="b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19" Jan 14 13:40:33.879692 containerd[1768]: time="2025-01-14T13:40:33.879601309Z" level=error msg="ContainerStatus for \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\": not found" Jan 14 13:40:33.879796 kubelet[3459]: E0114 13:40:33.879745 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\": not found" containerID="b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19" Jan 14 13:40:33.879855 kubelet[3459]: I0114 13:40:33.879771 3459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19"} err="failed to get container status \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\": rpc error: code = NotFound desc = an error occurred when try to find container \"b52968c9089419198d4ffd40b471fa8afc93d63890660fefcf7b1ac696228a19\": not found" Jan 14 13:40:33.879855 kubelet[3459]: I0114 13:40:33.879837 3459 scope.go:117] "RemoveContainer" containerID="8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac" Jan 14 13:40:33.880273 containerd[1768]: time="2025-01-14T13:40:33.880059469Z" level=error msg="ContainerStatus for \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\": not found" Jan 14 13:40:33.880342 kubelet[3459]: E0114 13:40:33.880177 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\": not found" containerID="8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac" Jan 14 13:40:33.880342 kubelet[3459]: I0114 13:40:33.880197 3459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac"} err="failed to get container status \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2fa83b2248327315a94eca58aa1a2c84383314d8e610d0feb95c0a7f245ac\": not found" Jan 14 13:40:33.880342 kubelet[3459]: I0114 13:40:33.880211 3459 scope.go:117] "RemoveContainer" containerID="4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de" Jan 14 13:40:33.880437 containerd[1768]: time="2025-01-14T13:40:33.880371189Z" level=error msg="ContainerStatus for \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\": not found" Jan 14 13:40:33.880513 kubelet[3459]: E0114 13:40:33.880483 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\": not found" containerID="4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de" Jan 14 13:40:33.880555 kubelet[3459]: I0114 13:40:33.880516 3459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de"} err="failed to get container status \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f65ec849e828b72a7a74ce26a68c089f955ab8004bb7aad214d1106341652de\": not found" Jan 14 13:40:33.880555 kubelet[3459]: I0114 13:40:33.880540 3459 scope.go:117] "RemoveContainer" containerID="e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3" Jan 14 13:40:33.880769 containerd[1768]: time="2025-01-14T13:40:33.880737029Z" level=error msg="ContainerStatus for \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\": not found" Jan 14 13:40:33.880916 kubelet[3459]: E0114 13:40:33.880872 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\": not found" containerID="e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3" Jan 14 13:40:33.880951 kubelet[3459]: I0114 13:40:33.880923 3459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3"} err="failed to get container status \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"e246848c615fc71e10f033a8085f25d7b03c33dab1c48e8a9b5cdb3dca0c80a3\": not found" Jan 14 13:40:33.880951 kubelet[3459]: I0114 13:40:33.880942 3459 scope.go:117] "RemoveContainer" containerID="88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b" Jan 14 13:40:33.881183 containerd[1768]: time="2025-01-14T13:40:33.881103029Z" level=error msg="ContainerStatus for \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\": not found" Jan 14 13:40:33.881240 kubelet[3459]: E0114 13:40:33.881207 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\": not found" containerID="88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b" Jan 14 13:40:33.881240 kubelet[3459]: I0114 13:40:33.881222 3459 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b"} err="failed to get container status \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\": rpc error: code = NotFound desc = an error occurred when try to find container \"88f42023b75ac7e62068e9eadbc66e03544afa3960004b797813b7ebae53173b\": not found" Jan 14 13:40:33.881240 kubelet[3459]: I0114 13:40:33.881234 3459 scope.go:117] "RemoveContainer" containerID="b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e" Jan 14 13:40:33.882212 containerd[1768]: time="2025-01-14T13:40:33.882180389Z" level=info msg="RemoveContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\"" Jan 14 13:40:33.889447 containerd[1768]: time="2025-01-14T13:40:33.889417230Z" level=info msg="RemoveContainer for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" returns successfully" Jan 14 13:40:33.889625 kubelet[3459]: I0114 13:40:33.889574 3459 scope.go:117] "RemoveContainer" containerID="b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e" Jan 14 13:40:33.889948 containerd[1768]: time="2025-01-14T13:40:33.889782750Z" level=error msg="ContainerStatus for \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\": not found" Jan 14 13:40:33.890010 kubelet[3459]: E0114 13:40:33.889902 3459 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\": not found" containerID="b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e" Jan 14 13:40:33.890010 kubelet[3459]: I0114 13:40:33.889922 3459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e"} err="failed to get container status \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5cee895dda1cef4f2f26a7731d3c9b17a4986eea90abdcae87d1c0d56dc7e\": not found" Jan 14 13:40:34.215807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76a588a73d1c3104ffb83324ff18a680bf1f6225d69c6375924237c48b418fb4-rootfs.mount: Deactivated successfully. Jan 14 13:40:34.215911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5618e23c3360ba1c80b65b91adf91407a04658f45ec7fc72c9b43e96c8b091d-rootfs.mount: Deactivated successfully. Jan 14 13:40:34.215967 systemd[1]: var-lib-kubelet-pods-25fbf88b\x2d86d0\x2d48e5\x2db9a5\x2d948785e2c45b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57vfr.mount: Deactivated successfully. Jan 14 13:40:34.216019 systemd[1]: var-lib-kubelet-pods-25fbf88b\x2d86d0\x2d48e5\x2db9a5\x2d948785e2c45b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 13:40:34.216067 systemd[1]: var-lib-kubelet-pods-25fbf88b\x2d86d0\x2d48e5\x2db9a5\x2d948785e2c45b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 14 13:40:34.216115 systemd[1]: var-lib-kubelet-pods-17222547\x2d41e8\x2d4fc3\x2d862e\x2db19f632d5385-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhwvb.mount: Deactivated successfully. 
Jan 14 13:40:34.319190 kubelet[3459]: I0114 13:40:34.319151 3459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17222547-41e8-4fc3-862e-b19f632d5385" path="/var/lib/kubelet/pods/17222547-41e8-4fc3-862e-b19f632d5385/volumes" Jan 14 13:40:34.319586 kubelet[3459]: I0114 13:40:34.319562 3459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" path="/var/lib/kubelet/pods/25fbf88b-86d0-48e5-b9a5-948785e2c45b/volumes" Jan 14 13:40:34.444903 kubelet[3459]: E0114 13:40:34.444868 3459 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:40:35.246396 sshd[5067]: Connection closed by 10.200.16.10 port 45628 Jan 14 13:40:35.246921 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:35.249464 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:45628.service: Deactivated successfully. Jan 14 13:40:35.251333 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:40:35.253124 systemd-logind[1740]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:40:35.254591 systemd-logind[1740]: Removed session 26. Jan 14 13:40:35.331795 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:45634.service - OpenSSH per-connection server daemon (10.200.16.10:45634). Jan 14 13:40:35.778121 sshd[5232]: Accepted publickey for core from 10.200.16.10 port 45634 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:35.779401 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:35.783156 systemd-logind[1740]: New session 27 of user core. Jan 14 13:40:35.791551 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 13:40:37.007052 kubelet[3459]: I0114 13:40:37.007001 3459 topology_manager.go:215] "Topology Admit Handler" podUID="6eb425df-f899-45b9-9c9f-c0f3f4d227a0" podNamespace="kube-system" podName="cilium-n62w6" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007165 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="cilium-agent" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007179 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17222547-41e8-4fc3-862e-b19f632d5385" containerName="cilium-operator" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007185 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="mount-cgroup" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007190 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="clean-cilium-state" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007196 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="apply-sysctl-overwrites" Jan 14 13:40:37.007425 kubelet[3459]: E0114 13:40:37.007202 3459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="mount-bpf-fs" Jan 14 13:40:37.007425 kubelet[3459]: I0114 13:40:37.007233 3459 memory_manager.go:354] "RemoveStaleState removing state" podUID="17222547-41e8-4fc3-862e-b19f632d5385" containerName="cilium-operator" Jan 14 13:40:37.007425 kubelet[3459]: I0114 13:40:37.007240 3459 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fbf88b-86d0-48e5-b9a5-948785e2c45b" containerName="cilium-agent" Jan 14 13:40:37.016604 systemd[1]: Created slice kubepods-burstable-pod6eb425df_f899_45b9_9c9f_c0f3f4d227a0.slice - libcontainer container kubepods-burstable-pod6eb425df_f899_45b9_9c9f_c0f3f4d227a0.slice. 
Jan 14 13:40:37.017784 kubelet[3459]: I0114 13:40:37.017734 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-cilium-config-path\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017784 kubelet[3459]: I0114 13:40:37.017772 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-etc-cni-netd\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017791 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-cilium-ipsec-secrets\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017806 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-hubble-tls\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017822 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-bpf-maps\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017837 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-xtables-lock\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017852 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-host-proc-sys-kernel\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.017871 kubelet[3459]: I0114 13:40:37.017866 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-cni-path\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017885 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-cilium-run\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017899 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-hostproc\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017912 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-cilium-cgroup\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017927 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-lib-modules\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017945 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8rt\" (UniqueName: \"kubernetes.io/projected/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-kube-api-access-gn8rt\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018002 kubelet[3459]: I0114 13:40:37.017962 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-clustermesh-secrets\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.018124 kubelet[3459]: I0114 13:40:37.017978 3459 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eb425df-f899-45b9-9c9f-c0f3f4d227a0-host-proc-sys-net\") pod \"cilium-n62w6\" (UID: \"6eb425df-f899-45b9-9c9f-c0f3f4d227a0\") " pod="kube-system/cilium-n62w6" Jan 14 13:40:37.075747 sshd[5234]: Connection closed by 10.200.16.10 port 45634 Jan 14 13:40:37.076436 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:37.081436 systemd-logind[1740]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:40:37.081663 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:45634.service: Deactivated successfully. Jan 14 13:40:37.084039 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:40:37.084953 systemd-logind[1740]: Removed session 27. Jan 14 13:40:37.160871 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.16.10:54268.service - OpenSSH per-connection server daemon (10.200.16.10:54268). Jan 14 13:40:37.315590 kubelet[3459]: E0114 13:40:37.315418 3459 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9hz4z" podUID="c355f697-b7d1-41ba-b2ae-922cdab4aa41" Jan 14 13:40:37.323861 containerd[1768]: time="2025-01-14T13:40:37.323798118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n62w6,Uid:6eb425df-f899-45b9-9c9f-c0f3f4d227a0,Namespace:kube-system,Attempt:0,}" Jan 14 13:40:37.362959 containerd[1768]: time="2025-01-14T13:40:37.362769485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:40:37.362959 containerd[1768]: time="2025-01-14T13:40:37.362895805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:40:37.362959 containerd[1768]: time="2025-01-14T13:40:37.362908565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:40:37.363188 containerd[1768]: time="2025-01-14T13:40:37.363012005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:40:37.383518 systemd[1]: Started cri-containerd-e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de.scope - libcontainer container e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de. Jan 14 13:40:37.402291 containerd[1768]: time="2025-01-14T13:40:37.402224211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n62w6,Uid:6eb425df-f899-45b9-9c9f-c0f3f4d227a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\"" Jan 14 13:40:37.406271 containerd[1768]: time="2025-01-14T13:40:37.406185772Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:40:37.440533 containerd[1768]: time="2025-01-14T13:40:37.440483297Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18\"" Jan 14 13:40:37.441621 containerd[1768]: time="2025-01-14T13:40:37.441515298Z" level=info msg="StartContainer for \"54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18\"" Jan 14 13:40:37.463536 systemd[1]: Started cri-containerd-54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18.scope - libcontainer container 54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18. Jan 14 13:40:37.490384 containerd[1768]: time="2025-01-14T13:40:37.490217986Z" level=info msg="StartContainer for \"54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18\" returns successfully" Jan 14 13:40:37.496018 systemd[1]: cri-containerd-54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18.scope: Deactivated successfully. 
Jan 14 13:40:37.565114 containerd[1768]: time="2025-01-14T13:40:37.565047558Z" level=info msg="shim disconnected" id=54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18 namespace=k8s.io Jan 14 13:40:37.565114 containerd[1768]: time="2025-01-14T13:40:37.565107918Z" level=warning msg="cleaning up after shim disconnected" id=54ea92577c2adb940067f48cc247c4c4c36071aa729d65c01375855eaa21bd18 namespace=k8s.io Jan 14 13:40:37.565114 containerd[1768]: time="2025-01-14T13:40:37.565116918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:37.574786 containerd[1768]: time="2025-01-14T13:40:37.574681759Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:40:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:40:37.623766 sshd[5248]: Accepted publickey for core from 10.200.16.10 port 54268 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:37.625136 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:37.628746 systemd-logind[1740]: New session 28 of user core. Jan 14 13:40:37.635470 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 13:40:37.843063 containerd[1768]: time="2025-01-14T13:40:37.842709364Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:40:37.874694 containerd[1768]: time="2025-01-14T13:40:37.874651529Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46\"" Jan 14 13:40:37.876073 containerd[1768]: time="2025-01-14T13:40:37.876049409Z" level=info msg="StartContainer for \"adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46\"" Jan 14 13:40:37.898512 systemd[1]: Started cri-containerd-adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46.scope - libcontainer container adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46. Jan 14 13:40:37.924965 containerd[1768]: time="2025-01-14T13:40:37.923983097Z" level=info msg="StartContainer for \"adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46\" returns successfully" Jan 14 13:40:37.928138 systemd[1]: cri-containerd-adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46.scope: Deactivated successfully. Jan 14 13:40:37.961420 sshd[5354]: Connection closed by 10.200.16.10 port 54268 Jan 14 13:40:37.962050 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:37.963959 containerd[1768]: time="2025-01-14T13:40:37.963869424Z" level=info msg="shim disconnected" id=adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46 namespace=k8s.io Jan 14 13:40:37.963959 containerd[1768]: time="2025-01-14T13:40:37.963952704Z" level=warning msg="cleaning up after shim disconnected" id=adbb8e756d77349671c01ddcf96ff79e4066b782d70d42674f9ceffeaef26e46 namespace=k8s.io Jan 14 13:40:37.963959 containerd[1768]: time="2025-01-14T13:40:37.963961664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:37.966102 systemd[1]: sshd@25-10.200.20.12:22-10.200.16.10:54268.service: Deactivated successfully. 
Jan 14 13:40:37.969014 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:40:37.970010 systemd-logind[1740]: Session 28 logged out. Waiting for processes to exit. Jan 14 13:40:37.971834 systemd-logind[1740]: Removed session 28. Jan 14 13:40:38.048590 systemd[1]: Started sshd@26-10.200.20.12:22-10.200.16.10:54274.service - OpenSSH per-connection server daemon (10.200.16.10:54274). Jan 14 13:40:38.494628 sshd[5421]: Accepted publickey for core from 10.200.16.10 port 54274 ssh2: RSA SHA256:AMUBWb04LkINjl6iymCQ58zI8KSkiZGdP88JbHPzCuU Jan 14 13:40:38.496193 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:40:38.500403 systemd-logind[1740]: New session 29 of user core. Jan 14 13:40:38.504494 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 13:40:38.627417 kubelet[3459]: I0114 13:40:38.627365 3459 setters.go:580] "Node became not ready" node="ci-4186.1.0-a-8a230934f7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-14T13:40:38Z","lastTransitionTime":"2025-01-14T13:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 14 13:40:38.845902 containerd[1768]: time="2025-01-14T13:40:38.845693049Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:40:38.869291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897575057.mount: Deactivated successfully. Jan 14 13:40:38.881771 update_engine[1744]: I20250114 13:40:38.881696 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:40:38.882134 update_engine[1744]: I20250114 13:40:38.881882 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:40:38.882166 update_engine[1744]: I20250114 13:40:38.882145 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:40:38.891729 containerd[1768]: time="2025-01-14T13:40:38.891683336Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679\"" Jan 14 13:40:38.892597 containerd[1768]: time="2025-01-14T13:40:38.892565496Z" level=info msg="StartContainer for \"360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679\"" Jan 14 13:40:38.919790 systemd[1]: Started cri-containerd-360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679.scope - libcontainer container 360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679. Jan 14 13:40:38.946868 systemd[1]: cri-containerd-360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679.scope: Deactivated successfully. 
Jan 14 13:40:38.948854 containerd[1768]: time="2025-01-14T13:40:38.948798666Z" level=info msg="StartContainer for \"360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679\" returns successfully" Jan 14 13:40:38.977209 update_engine[1744]: E20250114 13:40:38.977104 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:40:38.977209 update_engine[1744]: I20250114 13:40:38.977185 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 13:40:38.981143 containerd[1768]: time="2025-01-14T13:40:38.981021511Z" level=info msg="shim disconnected" id=360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679 namespace=k8s.io Jan 14 13:40:38.981258 containerd[1768]: time="2025-01-14T13:40:38.981106711Z" level=warning msg="cleaning up after shim disconnected" id=360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679 namespace=k8s.io Jan 14 13:40:38.981258 containerd[1768]: time="2025-01-14T13:40:38.981167191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:39.123956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-360eb1244e67ae6783cabb88264acd0342e11df159dd1f90ebfe50168dbeb679-rootfs.mount: Deactivated successfully. Jan 14 13:40:39.315583 kubelet[3459]: E0114 13:40:39.315507 3459 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9hz4z" podUID="c355f697-b7d1-41ba-b2ae-922cdab4aa41" Jan 14 13:40:39.446778 kubelet[3459]: E0114 13:40:39.446702 3459 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:40:39.851086 containerd[1768]: time="2025-01-14T13:40:39.850630174Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:40:39.887111 containerd[1768]: time="2025-01-14T13:40:39.887026620Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f\"" Jan 14 13:40:39.887923 containerd[1768]: time="2025-01-14T13:40:39.887718700Z" level=info msg="StartContainer for \"8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f\"" Jan 14 13:40:39.916520 systemd[1]: Started cri-containerd-8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f.scope - libcontainer container 8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f. Jan 14 13:40:39.938160 systemd[1]: cri-containerd-8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f.scope: Deactivated successfully. 
Jan 14 13:40:39.941801 containerd[1768]: time="2025-01-14T13:40:39.940447749Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb425df_f899_45b9_9c9f_c0f3f4d227a0.slice/cri-containerd-8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f.scope/memory.events\": no such file or directory" Jan 14 13:40:39.948373 containerd[1768]: time="2025-01-14T13:40:39.948257790Z" level=info msg="StartContainer for \"8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f\" returns successfully" Jan 14 13:40:39.981217 containerd[1768]: time="2025-01-14T13:40:39.981021355Z" level=info msg="shim disconnected" id=8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f namespace=k8s.io Jan 14 13:40:39.981217 containerd[1768]: time="2025-01-14T13:40:39.981086515Z" level=warning msg="cleaning up after shim disconnected" id=8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f namespace=k8s.io Jan 14 13:40:39.981217 containerd[1768]: time="2025-01-14T13:40:39.981094955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:40:40.123922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ef8701098a89ec47d0f21d57d4f3c5c27ae6e4012ec492afa62eb2f779c235f-rootfs.mount: Deactivated successfully. Jan 14 13:40:40.855809 containerd[1768]: time="2025-01-14T13:40:40.855544499Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:40:40.895850 containerd[1768]: time="2025-01-14T13:40:40.895799506Z" level=info msg="CreateContainer within sandbox \"e43c08c2694ce3aff72e4b7ff4162e0bf33dd752a34fe804a35f721912bde5de\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea\"" Jan 14 13:40:40.897542 containerd[1768]: time="2025-01-14T13:40:40.897496746Z" level=info msg="StartContainer for \"ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea\"" Jan 14 13:40:40.924552 systemd[1]: Started cri-containerd-ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea.scope - libcontainer container ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea. 
Jan 14 13:40:40.954733 containerd[1768]: time="2025-01-14T13:40:40.954663796Z" level=info msg="StartContainer for \"ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea\" returns successfully" Jan 14 13:40:41.316001 kubelet[3459]: E0114 13:40:41.315580 3459 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9hz4z" podUID="c355f697-b7d1-41ba-b2ae-922cdab4aa41" Jan 14 13:40:41.328395 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 14 13:40:41.873445 kubelet[3459]: I0114 13:40:41.873262 3459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n62w6" podStartSLOduration=5.873238227 podStartE2EDuration="5.873238227s" podCreationTimestamp="2025-01-14 13:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:40:41.873172987 +0000 UTC m=+217.660789849" watchObservedRunningTime="2025-01-14 13:40:41.873238227 +0000 UTC m=+217.660855089" Jan 14 13:40:43.316682 kubelet[3459]: E0114 13:40:43.316191 3459 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9hz4z" podUID="c355f697-b7d1-41ba-b2ae-922cdab4aa41" Jan 14 13:40:43.984492 systemd-networkd[1340]: lxc_health: Link UP Jan 14 13:40:43.998789 systemd-networkd[1340]: lxc_health: Gained carrier Jan 14 13:40:45.071191 systemd[1]: run-containerd-runc-k8s.io-ee2aff139aa4f9934e2b700e7150fbcd59e6f5562dcd7279a721cd36584b5aea-runc.257Myb.mount: Deactivated successfully. Jan 14 13:40:45.295513 systemd-networkd[1340]: lxc_health: Gained IPv6LL Jan 14 13:40:48.881069 update_engine[1744]: I20250114 13:40:48.881002 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:40:48.881434 update_engine[1744]: I20250114 13:40:48.881222 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:40:48.881519 update_engine[1744]: I20250114 13:40:48.881487 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:40:48.924974 update_engine[1744]: E20250114 13:40:48.924922 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:40:48.925068 update_engine[1744]: I20250114 13:40:48.925006 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 13:40:51.573985 sshd[5423]: Connection closed by 10.200.16.10 port 54274 Jan 14 13:40:51.575564 sshd-session[5421]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:51.578143 systemd[1]: sshd@26-10.200.20.12:22-10.200.16.10:54274.service: Deactivated successfully. Jan 14 13:40:51.580346 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 13:40:51.581703 systemd-logind[1740]: Session 29 logged out. Waiting for processes to exit. Jan 14 13:40:51.582787 systemd-logind[1740]: Removed session 29.