Jan 17 12:09:12.363754 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 12:09:12.363776 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025 Jan 17 12:09:12.363785 kernel: KASLR enabled Jan 17 12:09:12.363791 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 12:09:12.363798 kernel: printk: bootconsole [pl11] enabled Jan 17 12:09:12.363804 kernel: efi: EFI v2.7 by EDK II Jan 17 12:09:12.363812 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 12:09:12.363819 kernel: random: crng init done Jan 17 12:09:12.363825 kernel: ACPI: Early table checksum verification disabled Jan 17 12:09:12.363831 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 12:09:12.363837 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363844 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363851 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 12:09:12.363858 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363865 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363872 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363879 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363887 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363893 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363900 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 12:09:12.363907 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 12:09:12.363913 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 12:09:12.363920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 12:09:12.363927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 12:09:12.363933 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 12:09:12.363940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 12:09:12.363947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 12:09:12.363953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 12:09:12.363961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 12:09:12.363968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 12:09:12.363974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 12:09:12.363981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 12:09:12.363988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 12:09:12.363994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 12:09:12.364001 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 12:09:12.364007 kernel: Zone ranges: Jan 17 12:09:12.364014 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 17 12:09:12.364020 kernel: DMA32 empty Jan 17 12:09:12.364027 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 12:09:12.364034 kernel: Movable zone start for each node Jan 17 12:09:12.364044 kernel: Early memory node ranges Jan 17 12:09:12.364051 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 12:09:12.364058 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 12:09:12.364066 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 12:09:12.364073 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 12:09:12.364081 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 12:09:12.364088 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 12:09:12.364096 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 12:09:12.364103 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 12:09:12.364110 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 12:09:12.364117 kernel: psci: probing for conduit method from ACPI. Jan 17 12:09:12.364124 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 12:09:12.364131 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 12:09:12.364138 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 12:09:12.364145 kernel: psci: SMC Calling Convention v1.4 Jan 17 12:09:12.364152 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 12:09:12.364159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 12:09:12.364167 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 17 12:09:12.370281 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 17 12:09:12.370293 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 12:09:12.370301 kernel: Detected PIPT I-cache on CPU0 Jan 17 12:09:12.370308 kernel: CPU features: detected: GIC system register CPU interface Jan 17 12:09:12.370315 kernel: CPU features: detected: Hardware dirty bit management Jan 17 12:09:12.370327 kernel: CPU features: detected: Spectre-BHB Jan 17 12:09:12.370337 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 12:09:12.370345 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 12:09:12.370352 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 12:09:12.370359 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 12:09:12.370371 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 12:09:12.370382 kernel: alternatives: applying boot alternatives Jan 17 12:09:12.370391 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:09:12.370399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:09:12.370406 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:09:12.370414 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:09:12.370421 kernel: Fallback order for Node 0: 0 Jan 17 12:09:12.370431 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 12:09:12.370438 kernel: Policy zone: Normal Jan 17 12:09:12.370445 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:09:12.370452 kernel: software IO TLB: area num 2. Jan 17 12:09:12.370461 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 12:09:12.370472 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 17 12:09:12.370479 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:09:12.370486 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:09:12.370493 kernel: rcu: RCU event tracing is enabled. Jan 17 12:09:12.370501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:09:12.370511 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:09:12.370518 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:09:12.370525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:09:12.370532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:09:12.370539 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:09:12.370548 kernel: GICv3: 960 SPIs implemented Jan 17 12:09:12.370555 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:09:12.370565 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:09:12.370572 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 12:09:12.370579 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 12:09:12.370586 kernel: ITS: No ITS available, not enabling LPIs Jan 17 12:09:12.370593 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:09:12.370600 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:09:12.370607 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 12:09:12.370615 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 12:09:12.370625 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 12:09:12.370634 kernel: Console: colour dummy device 80x25 Jan 17 12:09:12.370641 kernel: printk: console [tty1] enabled Jan 17 12:09:12.370649 kernel: ACPI: Core revision 20230628 Jan 17 12:09:12.370657 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 12:09:12.370664 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:09:12.370674 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:09:12.370682 kernel: landlock: Up and running. Jan 17 12:09:12.370689 kernel: SELinux: Initializing. Jan 17 12:09:12.370696 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.370704 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.370713 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:09:12.370723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:09:12.370730 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 17 12:09:12.370738 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 17 12:09:12.370745 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 12:09:12.370752 kernel: rcu: Hierarchical SRCU implementation. 
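The kernel command line logged above can be split into key/value pairs; a minimal Python sketch using the string exactly as it appears in the log (on a running system the same data is available from /proc/cmdline):

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
        "console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected "
        "acpi=force flatcar.oem.id=azure flatcar.autologin "
        "verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3"
    )
    params = {}
    for tok in cmdline.split():
        key, _, val = tok.partition("=")       # flag-only options get an empty value
        params.setdefault(key, []).append(val)  # console= appears twice, so keep lists
    print(params["root"], params["flatcar.oem.id"], len(params["console"]))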
Jan 17 12:09:12.370762 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:09:12.370777 kernel: Remapping and enabling EFI services. Jan 17 12:09:12.370785 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:09:12.370792 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:09:12.370800 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 12:09:12.370809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:09:12.370819 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 12:09:12.370827 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:09:12.370835 kernel: SMP: Total of 2 processors activated. Jan 17 12:09:12.370843 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:09:12.370852 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 12:09:12.370863 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 12:09:12.370870 kernel: CPU features: detected: CRC32 instructions Jan 17 12:09:12.370878 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 12:09:12.370886 kernel: CPU features: detected: LSE atomic instructions Jan 17 12:09:12.370893 kernel: CPU features: detected: Privileged Access Never Jan 17 12:09:12.370901 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:09:12.370908 kernel: alternatives: applying system-wide alternatives Jan 17 12:09:12.370919 kernel: devtmpfs: initialized Jan 17 12:09:12.370928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:09:12.370936 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:09:12.370943 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:09:12.370951 kernel: SMBIOS 3.1.0 present. Jan 17 12:09:12.370959 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 12:09:12.370966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:09:12.370977 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:09:12.370985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:09:12.370993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:09:12.371002 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:09:12.371010 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 12:09:12.371018 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:09:12.371028 kernel: cpuidle: using governor menu Jan 17 12:09:12.371036 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
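The "50.00 BogoMIPS (lpj=25000)" calibration line in the previous segment follows directly from the 25.00 MHz architected timer reported there; a short sketch of the arithmetic, assuming HZ=1000 (inferred from the logged numbers, not stated in the log):

    timer_hz = 25_000_000          # arch_timer runs at 25.00 MHz per the log
    HZ = 1000                      # assumed tick rate, consistent with lpj=25000
    lpj = timer_hz // HZ           # 25000 -> matches "lpj=25000"
    bogomips = lpj * HZ / 500_000  # 50.0  -> matches "50.00 BogoMIPS"
    print(lpj, bogomips)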
Jan 17 12:09:12.371043 kernel: ASID allocator initialised with 32768 entries Jan 17 12:09:12.371051 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:09:12.371059 kernel: Serial: AMBA PL011 UART driver Jan 17 12:09:12.371066 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 12:09:12.371076 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 12:09:12.371086 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:09:12.371094 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:09:12.371101 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:09:12.371109 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:09:12.371117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:09:12.371124 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:09:12.371136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:09:12.371144 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:09:12.371153 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:09:12.371161 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:09:12.371169 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:09:12.371185 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:09:12.371194 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:09:12.371201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:09:12.371209 kernel: ACPI: Interpreter enabled Jan 17 12:09:12.371220 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:09:12.371228 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 12:09:12.371237 kernel: printk: console [ttyAMA0] enabled Jan 17 12:09:12.371245 kernel: printk: bootconsole [pl11] disabled Jan 17 12:09:12.371253 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 12:09:12.371264 kernel: iommu: Default domain type: Translated Jan 17 12:09:12.371272 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:09:12.371279 kernel: efivars: Registered efivars operations Jan 17 12:09:12.371287 kernel: vgaarb: loaded Jan 17 12:09:12.371295 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:09:12.371302 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:09:12.371312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:09:12.371319 kernel: pnp: PnP ACPI init Jan 17 12:09:12.371327 kernel: pnp: PnP ACPI: found 0 devices Jan 17 12:09:12.371338 kernel: NET: Registered PF_INET protocol family Jan 17 12:09:12.371346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:09:12.371353 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:09:12.371361 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:09:12.371369 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:09:12.371377 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:09:12.371386 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:09:12.371396 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.371404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.371412 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:09:12.371420 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:09:12.371427 kernel: kvm [1]: HYP mode not available Jan 17 12:09:12.371435 kernel: Initialise system trusted keyrings Jan 17 12:09:12.371442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:09:12.371453 kernel: Key type asymmetric registered Jan 17 12:09:12.371462 kernel: Asymmetric key parser 'x509' registered Jan 17 12:09:12.371470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:09:12.371477 kernel: io scheduler mq-deadline registered Jan 17 12:09:12.371485 kernel: io scheduler kyber registered Jan 17 12:09:12.371492 kernel: io scheduler bfq registered Jan 17 12:09:12.371500 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:09:12.371508 kernel: thunder_xcv, ver 1.0 Jan 17 12:09:12.371518 kernel: thunder_bgx, ver 1.0 Jan 17 12:09:12.371526 kernel: nicpf, ver 1.0 Jan 17 12:09:12.371533 kernel: nicvf, ver 1.0 Jan 17 12:09:12.371676 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:09:12.371749 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:09:11 UTC (1737115751) Jan 17 12:09:12.371760 kernel: efifb: probing for efifb Jan 17 12:09:12.371768 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 12:09:12.371775 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 12:09:12.371783 kernel: efifb: scrolling: redraw Jan 17 12:09:12.371791 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 12:09:12.371801 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:09:12.371809 kernel: fb0: EFI VGA frame buffer device Jan 17 12:09:12.371817 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 12:09:12.371824 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:09:12.371832 kernel: No ACPI PMU IRQ for CPU0 Jan 17 12:09:12.371839 kernel: No ACPI PMU IRQ for CPU1 Jan 17 12:09:12.371847 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 17 12:09:12.371855 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:09:12.371862 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:09:12.371872 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:09:12.371880 kernel: Segment Routing with IPv6 Jan 17 12:09:12.371887 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:09:12.371895 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:09:12.371903 kernel: Key type dns_resolver registered Jan 17 12:09:12.371910 kernel: registered taskstats version 1 Jan 17 12:09:12.371918 kernel: Loading compiled-in X.509 certificates Jan 17 12:09:12.371926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:09:12.371933 kernel: Key type .fscrypt registered Jan 17 12:09:12.371942 kernel: Key type fscrypt-provisioning registered Jan 17 12:09:12.371950 kernel: ima: No TPM chip found, activating TPM-bypass! 
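The rtc-efi line above prints both the wall-clock time and the raw epoch value; a quick standard-library check that the two agree:

    from datetime import datetime, timezone
    # 1737115751 is the value in parentheses in the rtc-efi message
    print(datetime.fromtimestamp(1737115751, tz=timezone.utc))
    # -> 2025-01-17 12:09:11+00:00, matching "setting system clock to 2025-01-17T12:09:11 UTC"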
Jan 17 12:09:12.371958 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:09:12.371965 kernel: ima: No architecture policies found Jan 17 12:09:12.371973 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:09:12.371981 kernel: clk: Disabling unused clocks Jan 17 12:09:12.371988 kernel: Freeing unused kernel memory: 39360K Jan 17 12:09:12.371996 kernel: Run /init as init process Jan 17 12:09:12.372003 kernel: with arguments: Jan 17 12:09:12.372012 kernel: /init Jan 17 12:09:12.372020 kernel: with environment: Jan 17 12:09:12.372027 kernel: HOME=/ Jan 17 12:09:12.372035 kernel: TERM=linux Jan 17 12:09:12.372042 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:09:12.372052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:12.372061 systemd[1]: Detected virtualization microsoft. Jan 17 12:09:12.372070 systemd[1]: Detected architecture arm64. Jan 17 12:09:12.372079 systemd[1]: Running in initrd. Jan 17 12:09:12.372087 systemd[1]: No hostname configured, using default hostname. Jan 17 12:09:12.372095 systemd[1]: Hostname set to . Jan 17 12:09:12.372103 systemd[1]: Initializing machine ID from random generator. Jan 17 12:09:12.372111 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:09:12.372119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:12.372133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:12.372142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:09:12.372152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:12.372160 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:09:12.372169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:09:12.372291 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:09:12.372300 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:09:12.372309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:12.372317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:12.372329 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:09:12.372337 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:12.372346 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:12.372354 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:12.372362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:12.372370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:12.372379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:09:12.372387 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 12:09:12.372397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:12.372405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:12.372413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:12.372422 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:12.372430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:09:12.372439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:12.372447 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:09:12.372455 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:09:12.372464 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:12.372473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:09:12.372503 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 12:09:12.372523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:12.372532 systemd-journald[217]: Journal started Jan 17 12:09:12.372553 systemd-journald[217]: Runtime Journal (/run/log/journal/0735c57c5a4141cc8c519834ce0c551e) is 8.0M, max 78.5M, 70.5M free. Jan 17 12:09:12.373142 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 12:09:12.406951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:09:12.406992 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:12.413604 kernel: Bridge firewalling registered Jan 17 12:09:12.417584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:12.420796 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 12:09:12.431365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:12.444525 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:09:12.457613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:12.469159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:12.493372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:09:12.502310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:12.518861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:09:12.533323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:12.558787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:12.574295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:09:12.588449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:12.605143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:12.633711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:09:12.647355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
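The bridge warning above is informational here, since the journal immediately shows systemd-modules-load inserting 'br_netfilter'. A small sketch for checking the module at runtime; the modules-load.d drop-in mentioned in the comments is the usual mechanism and an assumption, not something shown in this log:

    def module_loaded(name: str, proc_modules: str = "/proc/modules") -> bool:
        # /proc/modules lists one loaded module per line, name in the first column
        with open(proc_modules) as f:
            return any(line.split()[0] == name for line in f)

    print(module_loaded("br_netfilter"))
    # To load it persistently, a one-line file such as
    # /etc/modules-load.d/br_netfilter.conf containing "br_netfilter" is enough
    # for systemd-modules-load to insert it at boot.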
Jan 17 12:09:12.667870 dracut-cmdline[249]: dracut-dracut-053 Jan 17 12:09:12.682439 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:09:12.670336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:12.732965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:12.740751 systemd-resolved[253]: Positive Trust Anchors: Jan 17 12:09:12.740760 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:12.740791 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:12.743504 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 17 12:09:12.744473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:12.757020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:12.844195 kernel: SCSI subsystem initialized Jan 17 12:09:12.853190 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:09:12.863189 kernel: iscsi: registered transport (tcp) Jan 17 12:09:12.881370 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:09:12.881436 kernel: QLogic iSCSI HBA Driver Jan 17 12:09:12.915922 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:12.931512 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:09:12.966105 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:09:12.966162 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:09:12.973210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:09:13.022197 kernel: raid6: neonx8 gen() 15763 MB/s Jan 17 12:09:13.042186 kernel: raid6: neonx4 gen() 15662 MB/s Jan 17 12:09:13.062185 kernel: raid6: neonx2 gen() 13230 MB/s Jan 17 12:09:13.083186 kernel: raid6: neonx1 gen() 10480 MB/s Jan 17 12:09:13.103184 kernel: raid6: int64x8 gen() 6959 MB/s Jan 17 12:09:13.123184 kernel: raid6: int64x4 gen() 7349 MB/s Jan 17 12:09:13.144186 kernel: raid6: int64x2 gen() 6130 MB/s Jan 17 12:09:13.168971 kernel: raid6: int64x1 gen() 5058 MB/s Jan 17 12:09:13.168990 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s Jan 17 12:09:13.194432 kernel: raid6: .... 
xor() 11937 MB/s, rmw enabled Jan 17 12:09:13.194459 kernel: raid6: using neon recovery algorithm Jan 17 12:09:13.208021 kernel: xor: measuring software checksum speed Jan 17 12:09:13.208048 kernel: 8regs : 19783 MB/sec Jan 17 12:09:13.211944 kernel: 32regs : 19603 MB/sec Jan 17 12:09:13.220694 kernel: arm64_neon : 24968 MB/sec Jan 17 12:09:13.220706 kernel: xor: using function: arm64_neon (24968 MB/sec) Jan 17 12:09:13.272201 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:09:13.282429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:13.300317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:13.325520 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 17 12:09:13.332804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:13.351354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:09:13.368522 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 17 12:09:13.395019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:13.412359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:13.460043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:13.481522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:09:13.514505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:13.527940 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:13.552701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:13.568146 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:09:13.584186 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 12:09:13.585342 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:09:13.610888 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 12:09:13.610917 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 12:09:13.610281 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:13.630188 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 12:09:13.633769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:13.669497 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 12:09:13.669534 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:09:13.669545 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 12:09:13.677714 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 12:09:13.669852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:13.700020 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 12:09:13.700044 kernel: PTP clock support registered Jan 17 12:09:13.700055 kernel: scsi host0: storvsc_host_t Jan 17 12:09:13.687394 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
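The raid6 and xor lines in this segment show the kernel timing each candidate implementation and keeping the fastest. The selection policy can be illustrated with the logged figures; this is only a sketch of that policy, not the kernel's benchmarking code:

    raid6_gen = {  # MB/s values copied from the log
        "neonx8": 15763, "neonx4": 15662, "neonx2": 13230, "neonx1": 10480,
        "int64x8": 6959, "int64x4": 7349, "int64x2": 6130, "int64x1": 5058,
    }
    xor_funcs = {"8regs": 19783, "32regs": 19603, "arm64_neon": 24968}
    best_gen = max(raid6_gen, key=raid6_gen.get)
    best_xor = max(xor_funcs, key=xor_funcs.get)
    print(best_gen, best_xor)  # neonx8, arm64_neon, as chosen in the log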
Jan 17 12:09:13.714854 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 12:09:13.725585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:13.743050 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 12:09:13.743079 kernel: scsi host1: storvsc_host_t Jan 17 12:09:13.743379 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 17 12:09:13.725857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:13.748763 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:13.774609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:13.804816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:13.850939 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 12:09:13.850962 kernel: hv_vmbus: registering driver hv_utils Jan 17 12:09:13.850972 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 12:09:14.276240 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 12:09:14.276255 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 12:09:14.276266 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:09:14.276276 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 12:09:14.276286 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: VF slot 1 added Jan 17 12:09:14.276405 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 12:09:14.273908 systemd-resolved[253]: Clock change detected. Flushing caches. Jan 17 12:09:14.294280 kernel: hv_vmbus: registering driver hv_pci Jan 17 12:09:14.276980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:09:14.323063 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 12:09:14.366324 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:09:14.366441 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:09:14.366552 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 12:09:14.366672 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 12:09:14.366803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:14.366817 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:09:14.366942 kernel: hv_pci 98d6d79c-bcb1-4ad3-bd88-e205bfe97783: PCI VMBus probing: Using version 0x10004 Jan 17 12:09:14.504153 kernel: hv_pci 98d6d79c-bcb1-4ad3-bd88-e205bfe97783: PCI host bridge to bus bcb1:00 Jan 17 12:09:14.504321 kernel: pci_bus bcb1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 12:09:14.504420 kernel: pci_bus bcb1:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 12:09:14.504501 kernel: pci bcb1:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 12:09:14.504598 kernel: pci bcb1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:09:14.504686 kernel: pci bcb1:00:02.0: enabling Extended Tags Jan 17 12:09:14.504774 kernel: pci bcb1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bcb1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 12:09:14.504887 kernel: pci_bus bcb1:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 12:09:14.504971 kernel: pci bcb1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:09:14.326907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:09:14.549087 kernel: mlx5_core bcb1:00:02.0: enabling device (0000 -> 0002) Jan 17 12:09:14.766564 kernel: mlx5_core bcb1:00:02.0: firmware version: 16.30.1284 Jan 17 12:09:14.766687 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: VF registering: eth1 Jan 17 12:09:14.766778 kernel: mlx5_core bcb1:00:02.0 eth1: joined to eth0 Jan 17 12:09:14.766900 kernel: mlx5_core bcb1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 12:09:14.775819 kernel: mlx5_core bcb1:00:02.0 enP48305s1: renamed from eth1 Jan 17 12:09:14.887431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 12:09:15.026728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 12:09:15.047814 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (492) Jan 17 12:09:15.061223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:09:15.086820 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (493) Jan 17 12:09:15.099212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 12:09:15.106522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 12:09:15.136024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:09:15.159855 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:15.167815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:16.176818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:16.177854 disk-uuid[598]: The operation has completed successfully. Jan 17 12:09:16.234520 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:09:16.234614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:09:16.266933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:09:16.280752 sh[684]: Success Jan 17 12:09:16.324837 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:09:16.530255 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:09:16.551051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:09:16.558839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:09:16.589342 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:09:16.589395 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:16.596285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:09:16.601620 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:09:16.605920 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:09:17.031125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:09:17.036713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:09:17.063038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:09:17.070950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:09:17.107624 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:17.107670 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:17.112399 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:17.133836 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:17.141659 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:09:17.153234 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:17.159684 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:09:17.174938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:09:17.184817 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:17.203972 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:17.232282 systemd-networkd[868]: lo: Link UP Jan 17 12:09:17.232295 systemd-networkd[868]: lo: Gained carrier Jan 17 12:09:17.234296 systemd-networkd[868]: Enumeration completed Jan 17 12:09:17.235116 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:17.235119 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:17.236149 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:17.250612 systemd[1]: Reached target network.target - Network. Jan 17 12:09:17.290822 kernel: mlx5_core bcb1:00:02.0 enP48305s1: Link up Jan 17 12:09:17.328816 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: Data path switched to VF: enP48305s1 Jan 17 12:09:17.329693 systemd-networkd[868]: enP48305s1: Link UP Jan 17 12:09:17.330004 systemd-networkd[868]: eth0: Link UP Jan 17 12:09:17.330377 systemd-networkd[868]: eth0: Gained carrier Jan 17 12:09:17.330386 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:17.355301 systemd-networkd[868]: enP48305s1: Gained carrier Jan 17 12:09:17.369828 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:09:18.224449 ignition[866]: Ignition 2.19.0 Jan 17 12:09:18.224460 ignition[866]: Stage: fetch-offline Jan 17 12:09:18.227174 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:18.224514 ignition[866]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.224523 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.224614 ignition[866]: parsed url from cmdline: "" Jan 17 12:09:18.252081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
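The DHCPv4 line above hands eth0 the address 10.200.20.31/24 with gateway 10.200.20.1; a quick standard-library check that the gateway is on-link for that prefix:

    import ipaddress
    lease = ipaddress.ip_interface("10.200.20.31/24")   # from the log
    gateway = ipaddress.ip_address("10.200.20.1")
    print(lease.network)               # 10.200.20.0/24
    print(gateway in lease.network)    # True: gateway reachable directly on eth0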
Jan 17 12:09:18.224617 ignition[866]: no config URL provided Jan 17 12:09:18.224622 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:09:18.224628 ignition[866]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:09:18.224633 ignition[866]: failed to fetch config: resource requires networking Jan 17 12:09:18.224821 ignition[866]: Ignition finished successfully Jan 17 12:09:18.287451 ignition[878]: Ignition 2.19.0 Jan 17 12:09:18.287457 ignition[878]: Stage: fetch Jan 17 12:09:18.287637 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.287650 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.287743 ignition[878]: parsed url from cmdline: "" Jan 17 12:09:18.287746 ignition[878]: no config URL provided Jan 17 12:09:18.287751 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:09:18.287757 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:09:18.287777 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 12:09:18.391378 ignition[878]: GET result: OK Jan 17 12:09:18.391491 ignition[878]: config has been read from IMDS userdata Jan 17 12:09:18.391549 ignition[878]: parsing config with SHA512: 3b8d9acb002d678e68255b9772079af4dbb079a565cb9d945ed5fd38080d8453443f5f7d73603f319981b7beb0871d2ff77e6f2a65c610c80da7b5ee0f78cbfc Jan 17 12:09:18.395730 unknown[878]: fetched base config from "system" Jan 17 12:09:18.396192 ignition[878]: fetch: fetch complete Jan 17 12:09:18.395737 unknown[878]: fetched base config from "system" Jan 17 12:09:18.396197 ignition[878]: fetch: fetch passed Jan 17 12:09:18.395742 unknown[878]: fetched user config from "azure" Jan 17 12:09:18.396241 ignition[878]: Ignition finished successfully Jan 17 12:09:18.402479 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:09:18.423964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:09:18.441549 ignition[885]: Ignition 2.19.0 Jan 17 12:09:18.447258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:09:18.441557 ignition[885]: Stage: kargs Jan 17 12:09:18.464936 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:09:18.441730 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.480614 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:09:18.441739 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.487038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:18.442886 ignition[885]: kargs: kargs passed Jan 17 12:09:18.497968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:09:18.442933 ignition[885]: Ignition finished successfully Jan 17 12:09:18.510177 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:18.477530 ignition[891]: Ignition 2.19.0 Jan 17 12:09:18.521580 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:18.477537 ignition[891]: Stage: disks Jan 17 12:09:18.533480 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:18.477732 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.561017 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
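The Ignition fetch stage above reads user data from the Azure IMDS endpoint and then logs a SHA512 of the config it parsed. A minimal sketch of the same request, assuming the documented IMDS behaviour (the mandatory Metadata header and base64-encoded user data); it is not Ignition's own code:

    import base64, hashlib, urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")            # endpoint taken from the log
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()

    config = base64.b64decode(raw)       # IMDS returns user data base64-encoded
    print(hashlib.sha512(config).hexdigest())   # digest of the fetched config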
Jan 17 12:09:18.477741 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.479470 ignition[891]: disks: disks passed Jan 17 12:09:18.479513 ignition[891]: Ignition finished successfully Jan 17 12:09:18.660750 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 12:09:18.669398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:09:18.686005 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:09:18.747850 kernel: EXT4-fs (sda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:09:18.747163 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:09:18.752275 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:18.771012 systemd-networkd[868]: eth0: Gained IPv6LL Jan 17 12:09:18.797938 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:18.808408 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:09:18.817952 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:09:18.824851 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:09:18.876931 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (910) Jan 17 12:09:18.876955 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:18.824886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:18.850299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:09:18.890048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:09:18.911826 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:18.911852 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:18.911862 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:18.918293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:19.282955 systemd-networkd[868]: enP48305s1: Gained IPv6LL Jan 17 12:09:19.455348 coreos-metadata[912]: Jan 17 12:09:19.455 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:09:19.464697 coreos-metadata[912]: Jan 17 12:09:19.464 INFO Fetch successful Jan 17 12:09:19.464697 coreos-metadata[912]: Jan 17 12:09:19.464 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:09:19.482996 coreos-metadata[912]: Jan 17 12:09:19.482 INFO Fetch successful Jan 17 12:09:19.497378 coreos-metadata[912]: Jan 17 12:09:19.497 INFO wrote hostname ci-4081.3.0-a-4140a712f6 to /sysroot/etc/hostname Jan 17 12:09:19.498660 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:09:19.775719 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:09:19.815287 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:09:19.821763 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:09:19.839703 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:09:20.744469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 17 12:09:20.763087 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:09:20.771975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:09:20.794528 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:20.794185 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:09:20.822956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:09:20.833915 ignition[1027]: INFO : Ignition 2.19.0 Jan 17 12:09:20.833915 ignition[1027]: INFO : Stage: mount Jan 17 12:09:20.833915 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:20.833915 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:20.862140 ignition[1027]: INFO : mount: mount passed Jan 17 12:09:20.862140 ignition[1027]: INFO : Ignition finished successfully Jan 17 12:09:20.836106 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:09:20.854966 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:09:20.896001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:20.925773 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Jan 17 12:09:20.925823 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:20.936823 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:20.936874 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:20.945831 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:20.946960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:20.974625 ignition[1057]: INFO : Ignition 2.19.0 Jan 17 12:09:20.974625 ignition[1057]: INFO : Stage: files Jan 17 12:09:20.983469 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:20.983469 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:20.983469 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:09:20.983469 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:09:20.983469 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:09:21.054170 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:09:21.055283 unknown[1057]: wrote ssh authorized keys file for user: core Jan 17 12:09:21.144972 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:09:21.364691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:09:21.364691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:21.387171 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 12:09:21.807314 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 17 12:09:22.274542 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:09:22.444902 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:22.444902 ignition[1057]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:09:22.474807 ignition[1057]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: files passed Jan 17 12:09:22.486867 ignition[1057]: INFO : Ignition finished successfully Jan 17 12:09:22.487698 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:09:22.540077 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:09:22.559994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:09:22.575101 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:09:22.625989 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.625989 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.575195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:09:22.652001 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.604492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:22.613240 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.649963 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:09:22.684509 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:09:22.684615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:09:22.696857 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:09:22.708288 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:09:22.721041 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:09:22.736966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:09:22.762700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.782041 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:09:22.801216 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:09:22.801333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:09:22.816137 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:22.828779 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:22.842218 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:09:22.853857 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 12:09:22.853942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.870883 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:09:22.877037 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:09:22.888980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.901553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:22.913310 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:22.925734 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:09:22.938035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:22.951222 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:09:22.963036 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:09:22.975454 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:09:22.985743 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:09:22.985826 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:23.001942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:23.014088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:23.027373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:09:23.027419 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:23.041281 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:09:23.041349 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:23.060763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:09:23.060820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:23.076050 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:09:23.076095 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:09:23.087776 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:09:23.087824 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:09:23.156295 ignition[1109]: INFO : Ignition 2.19.0 Jan 17 12:09:23.156295 ignition[1109]: INFO : Stage: umount Jan 17 12:09:23.156295 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:23.156295 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:23.156295 ignition[1109]: INFO : umount: umount passed Jan 17 12:09:23.156295 ignition[1109]: INFO : Ignition finished successfully Jan 17 12:09:23.123031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:09:23.139728 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:09:23.139851 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:23.154913 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:09:23.164153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:09:23.164235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:23.175492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 12:09:23.175532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:23.208757 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:09:23.208881 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:09:23.227565 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:09:23.227652 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:09:23.240618 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:09:23.240669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:09:23.254125 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:09:23.254170 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:09:23.265736 systemd[1]: Stopped target network.target - Network. Jan 17 12:09:23.277202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:09:23.277264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:23.291668 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:09:23.302438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:09:23.305820 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:23.315713 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:09:23.326832 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:09:23.345929 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:09:23.345982 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:23.362228 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:09:23.362273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:23.369173 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:09:23.369236 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:09:23.381276 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:09:23.381359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:23.401319 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:09:23.419209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:09:23.424811 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 17 12:09:23.440734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:09:23.441344 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:09:23.442824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:09:23.453102 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:09:23.455037 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:09:23.475011 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:09:23.475073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:23.503975 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:09:23.686450 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: Data path switched from VF: enP48305s1 Jan 17 12:09:23.513507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:09:23.513568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 12:09:23.526711 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:09:23.526769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:23.538976 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:09:23.539022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:23.551754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:09:23.551808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:23.565266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:23.605212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:09:23.605345 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:23.626016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:09:23.626090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:23.637670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:09:23.637702 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:23.649723 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:09:23.649769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:23.680900 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:09:23.680971 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:23.698328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:23.698386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:23.744118 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:09:23.760855 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:09:23.760934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:23.777006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:23.777062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:23.790291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:09:23.790389 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:09:23.805030 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:09:23.805136 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:09:23.854814 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:09:23.854933 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:09:23.863680 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:09:23.875525 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:09:23.875582 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:23.913009 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:09:23.951618 systemd[1]: Switching root. 
Jan 17 12:09:24.049112 systemd-journald[217]: Journal stopped
Total pages: 1032156 Jan 17 12:09:12.370438 kernel: Policy zone: Normal Jan 17 12:09:12.370445 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:09:12.370452 kernel: software IO TLB: area num 2. Jan 17 12:09:12.370461 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 12:09:12.370472 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 17 12:09:12.370479 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:09:12.370486 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:09:12.370493 kernel: rcu: RCU event tracing is enabled. Jan 17 12:09:12.370501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:09:12.370511 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:09:12.370518 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:09:12.370525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:09:12.370532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:09:12.370539 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:09:12.370548 kernel: GICv3: 960 SPIs implemented Jan 17 12:09:12.370555 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:09:12.370565 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:09:12.370572 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 12:09:12.370579 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 12:09:12.370586 kernel: ITS: No ITS available, not enabling LPIs Jan 17 12:09:12.370593 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:09:12.370600 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:09:12.370607 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 12:09:12.370615 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 12:09:12.370625 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 12:09:12.370634 kernel: Console: colour dummy device 80x25 Jan 17 12:09:12.370641 kernel: printk: console [tty1] enabled Jan 17 12:09:12.370649 kernel: ACPI: Core revision 20230628 Jan 17 12:09:12.370657 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 12:09:12.370664 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:09:12.370674 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:09:12.370682 kernel: landlock: Up and running. Jan 17 12:09:12.370689 kernel: SELinux: Initializing. Jan 17 12:09:12.370696 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.370704 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.370713 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:09:12.370723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:09:12.370730 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 17 12:09:12.370738 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 17 12:09:12.370745 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 12:09:12.370752 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 12:09:12.370762 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:09:12.370777 kernel: Remapping and enabling EFI services. Jan 17 12:09:12.370785 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:09:12.370792 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:09:12.370800 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 12:09:12.370809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:09:12.370819 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 12:09:12.370827 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:09:12.370835 kernel: SMP: Total of 2 processors activated. Jan 17 12:09:12.370843 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:09:12.370852 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 12:09:12.370863 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 12:09:12.370870 kernel: CPU features: detected: CRC32 instructions Jan 17 12:09:12.370878 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 12:09:12.370886 kernel: CPU features: detected: LSE atomic instructions Jan 17 12:09:12.370893 kernel: CPU features: detected: Privileged Access Never Jan 17 12:09:12.370901 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:09:12.370908 kernel: alternatives: applying system-wide alternatives Jan 17 12:09:12.370919 kernel: devtmpfs: initialized Jan 17 12:09:12.370928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:09:12.370936 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:09:12.370943 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:09:12.370951 kernel: SMBIOS 3.1.0 present. Jan 17 12:09:12.370959 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 12:09:12.370966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:09:12.370977 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:09:12.370985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:09:12.370993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:09:12.371002 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:09:12.371010 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 12:09:12.371018 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:09:12.371028 kernel: cpuidle: using governor menu Jan 17 12:09:12.371036 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 12:09:12.371043 kernel: ASID allocator initialised with 32768 entries Jan 17 12:09:12.371051 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:09:12.371059 kernel: Serial: AMBA PL011 UART driver Jan 17 12:09:12.371066 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 12:09:12.371076 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 12:09:12.371086 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:09:12.371094 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:09:12.371101 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:09:12.371109 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:09:12.371117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:09:12.371124 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:09:12.371136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:09:12.371144 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:09:12.371153 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:09:12.371161 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:09:12.371169 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:09:12.371185 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:09:12.371194 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:09:12.371201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:09:12.371209 kernel: ACPI: Interpreter enabled Jan 17 12:09:12.371220 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:09:12.371228 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 12:09:12.371237 kernel: printk: console [ttyAMA0] enabled Jan 17 12:09:12.371245 kernel: printk: bootconsole [pl11] disabled Jan 17 12:09:12.371253 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 12:09:12.371264 kernel: iommu: Default domain type: Translated Jan 17 12:09:12.371272 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:09:12.371279 kernel: efivars: Registered efivars operations Jan 17 12:09:12.371287 kernel: vgaarb: loaded Jan 17 12:09:12.371295 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:09:12.371302 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:09:12.371312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:09:12.371319 kernel: pnp: PnP ACPI init Jan 17 12:09:12.371327 kernel: pnp: PnP ACPI: found 0 devices Jan 17 12:09:12.371338 kernel: NET: Registered PF_INET protocol family Jan 17 12:09:12.371346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:09:12.371353 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:09:12.371361 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:09:12.371369 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:09:12.371377 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:09:12.371386 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:09:12.371396 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.371404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:09:12.371412 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:09:12.371420 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:09:12.371427 kernel: kvm [1]: HYP mode not available Jan 17 12:09:12.371435 kernel: Initialise system trusted keyrings Jan 17 12:09:12.371442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:09:12.371453 kernel: Key type asymmetric registered Jan 17 12:09:12.371462 kernel: Asymmetric key parser 'x509' registered Jan 17 12:09:12.371470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:09:12.371477 kernel: io scheduler mq-deadline registered Jan 17 12:09:12.371485 kernel: io scheduler kyber registered Jan 17 12:09:12.371492 kernel: io scheduler bfq registered Jan 17 12:09:12.371500 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:09:12.371508 kernel: thunder_xcv, ver 1.0 Jan 17 12:09:12.371518 kernel: thunder_bgx, ver 1.0 Jan 17 12:09:12.371526 kernel: nicpf, ver 1.0 Jan 17 12:09:12.371533 kernel: nicvf, ver 1.0 Jan 17 12:09:12.371676 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:09:12.371749 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:09:11 UTC (1737115751) Jan 17 12:09:12.371760 kernel: efifb: probing for efifb Jan 17 12:09:12.371768 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 12:09:12.371775 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 12:09:12.371783 kernel: efifb: scrolling: redraw Jan 17 12:09:12.371791 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 12:09:12.371801 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:09:12.371809 kernel: fb0: EFI VGA frame buffer device Jan 17 12:09:12.371817 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 12:09:12.371824 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:09:12.371832 kernel: No ACPI PMU IRQ for CPU0 Jan 17 12:09:12.371839 kernel: No ACPI PMU IRQ for CPU1 Jan 17 12:09:12.371847 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 17 12:09:12.371855 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:09:12.371862 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:09:12.371872 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:09:12.371880 kernel: Segment Routing with IPv6 Jan 17 12:09:12.371887 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:09:12.371895 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:09:12.371903 kernel: Key type dns_resolver registered Jan 17 12:09:12.371910 kernel: registered taskstats version 1 Jan 17 12:09:12.371918 kernel: Loading compiled-in X.509 certificates Jan 17 12:09:12.371926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:09:12.371933 kernel: Key type .fscrypt registered Jan 17 12:09:12.371942 kernel: Key type fscrypt-provisioning registered Jan 17 12:09:12.371950 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:09:12.371958 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:09:12.371965 kernel: ima: No architecture policies found Jan 17 12:09:12.371973 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:09:12.371981 kernel: clk: Disabling unused clocks Jan 17 12:09:12.371988 kernel: Freeing unused kernel memory: 39360K Jan 17 12:09:12.371996 kernel: Run /init as init process Jan 17 12:09:12.372003 kernel: with arguments: Jan 17 12:09:12.372012 kernel: /init Jan 17 12:09:12.372020 kernel: with environment: Jan 17 12:09:12.372027 kernel: HOME=/ Jan 17 12:09:12.372035 kernel: TERM=linux Jan 17 12:09:12.372042 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:09:12.372052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:12.372061 systemd[1]: Detected virtualization microsoft. Jan 17 12:09:12.372070 systemd[1]: Detected architecture arm64. Jan 17 12:09:12.372079 systemd[1]: Running in initrd. Jan 17 12:09:12.372087 systemd[1]: No hostname configured, using default hostname. Jan 17 12:09:12.372095 systemd[1]: Hostname set to . Jan 17 12:09:12.372103 systemd[1]: Initializing machine ID from random generator. Jan 17 12:09:12.372111 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:09:12.372119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:12.372133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:12.372142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:09:12.372152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:12.372160 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:09:12.372169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:09:12.372291 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:09:12.372300 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:09:12.372309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:12.372317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:12.372329 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:09:12.372337 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:12.372346 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:12.372354 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:12.372362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:12.372370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:12.372379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:09:12.372387 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
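The device units the initrd waits for above have names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device because systemd derives unit names from device paths by escaping them: "/" becomes "-", and characters outside a small allowed set (including a literal "-") are hex-escaped. The sketch below is a simplified Python re-implementation of that rule for illustration, not systemd's own code; it reproduces the names seen in the log.

```python
def systemd_path_escape(path: str) -> str:
    # Simplified sketch of systemd's path escaping (cf. `systemd-escape --path`):
    # strip surrounding "/", map "/" to "-", keep ASCII alphanumerics and ":_."
    # (except a leading "."), and hex-escape everything else, which is why a
    # literal "-" in the path appears as "\x2d" in the unit name.
    path = path.strip("/") or "/"
    out = []
    for i, ch in enumerate(path):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or (ch in ":_." and not (ch == "." and i == 0)):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```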
Jan 17 12:09:12.372397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:12.372405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:12.372413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:12.372422 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:12.372430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:09:12.372439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:12.372447 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:09:12.372455 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:09:12.372464 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:12.372473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:09:12.372503 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 12:09:12.372523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:12.372532 systemd-journald[217]: Journal started Jan 17 12:09:12.372553 systemd-journald[217]: Runtime Journal (/run/log/journal/0735c57c5a4141cc8c519834ce0c551e) is 8.0M, max 78.5M, 70.5M free. Jan 17 12:09:12.373142 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 12:09:12.406951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:09:12.406992 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:12.413604 kernel: Bridge firewalling registered Jan 17 12:09:12.417584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:12.420796 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 12:09:12.431365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:12.444525 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:09:12.457613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:12.469159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:12.493372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:09:12.502310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:12.518861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:09:12.533323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:12.558787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:12.574295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:09:12.588449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:12.605143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:12.633711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:09:12.647355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 12:09:12.667870 dracut-cmdline[249]: dracut-dracut-053 Jan 17 12:09:12.682439 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:09:12.670336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:12.732965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:12.740751 systemd-resolved[253]: Positive Trust Anchors: Jan 17 12:09:12.740760 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:12.740791 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:12.743504 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 17 12:09:12.744473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:12.757020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:12.844195 kernel: SCSI subsystem initialized Jan 17 12:09:12.853190 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:09:12.863189 kernel: iscsi: registered transport (tcp) Jan 17 12:09:12.881370 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:09:12.881436 kernel: QLogic iSCSI HBA Driver Jan 17 12:09:12.915922 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:12.931512 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:09:12.966105 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:09:12.966162 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:09:12.973210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:09:13.022197 kernel: raid6: neonx8 gen() 15763 MB/s Jan 17 12:09:13.042186 kernel: raid6: neonx4 gen() 15662 MB/s Jan 17 12:09:13.062185 kernel: raid6: neonx2 gen() 13230 MB/s Jan 17 12:09:13.083186 kernel: raid6: neonx1 gen() 10480 MB/s Jan 17 12:09:13.103184 kernel: raid6: int64x8 gen() 6959 MB/s Jan 17 12:09:13.123184 kernel: raid6: int64x4 gen() 7349 MB/s Jan 17 12:09:13.144186 kernel: raid6: int64x2 gen() 6130 MB/s Jan 17 12:09:13.168971 kernel: raid6: int64x1 gen() 5058 MB/s Jan 17 12:09:13.168990 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s Jan 17 12:09:13.194432 kernel: raid6: .... 
xor() 11937 MB/s, rmw enabled Jan 17 12:09:13.194459 kernel: raid6: using neon recovery algorithm Jan 17 12:09:13.208021 kernel: xor: measuring software checksum speed Jan 17 12:09:13.208048 kernel: 8regs : 19783 MB/sec Jan 17 12:09:13.211944 kernel: 32regs : 19603 MB/sec Jan 17 12:09:13.220694 kernel: arm64_neon : 24968 MB/sec Jan 17 12:09:13.220706 kernel: xor: using function: arm64_neon (24968 MB/sec) Jan 17 12:09:13.272201 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:09:13.282429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:13.300317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:13.325520 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 17 12:09:13.332804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:13.351354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:09:13.368522 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 17 12:09:13.395019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:13.412359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:13.460043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:13.481522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:09:13.514505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:13.527940 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:13.552701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:13.568146 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:09:13.584186 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 12:09:13.585342 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:09:13.610888 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 12:09:13.610917 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 12:09:13.610281 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:13.630188 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 12:09:13.633769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:13.669497 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 12:09:13.669534 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:09:13.669545 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 12:09:13.677714 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 12:09:13.669852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:13.700020 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 12:09:13.700044 kernel: PTP clock support registered Jan 17 12:09:13.700055 kernel: scsi host0: storvsc_host_t Jan 17 12:09:13.687394 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 12:09:13.714854 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 12:09:13.725585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:13.743050 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 12:09:13.743079 kernel: scsi host1: storvsc_host_t Jan 17 12:09:13.743379 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 17 12:09:13.725857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:13.748763 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:13.774609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:13.804816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:13.850939 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 12:09:13.850962 kernel: hv_vmbus: registering driver hv_utils Jan 17 12:09:13.850972 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 12:09:14.276240 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 12:09:14.276255 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 12:09:14.276266 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:09:14.276276 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 12:09:14.276286 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: VF slot 1 added Jan 17 12:09:14.276405 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 12:09:14.273908 systemd-resolved[253]: Clock change detected. Flushing caches. Jan 17 12:09:14.294280 kernel: hv_vmbus: registering driver hv_pci Jan 17 12:09:14.276980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:09:14.323063 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 12:09:14.366324 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:09:14.366441 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:09:14.366552 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 12:09:14.366672 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 12:09:14.366803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:14.366817 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:09:14.366942 kernel: hv_pci 98d6d79c-bcb1-4ad3-bd88-e205bfe97783: PCI VMBus probing: Using version 0x10004 Jan 17 12:09:14.504153 kernel: hv_pci 98d6d79c-bcb1-4ad3-bd88-e205bfe97783: PCI host bridge to bus bcb1:00 Jan 17 12:09:14.504321 kernel: pci_bus bcb1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 12:09:14.504420 kernel: pci_bus bcb1:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 12:09:14.504501 kernel: pci bcb1:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 12:09:14.504598 kernel: pci bcb1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:09:14.504686 kernel: pci bcb1:00:02.0: enabling Extended Tags Jan 17 12:09:14.504774 kernel: pci bcb1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bcb1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 12:09:14.504887 kernel: pci_bus bcb1:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 12:09:14.504971 kernel: pci bcb1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:09:14.326907 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:09:14.549087 kernel: mlx5_core bcb1:00:02.0: enabling device (0000 -> 0002) Jan 17 12:09:14.766564 kernel: mlx5_core bcb1:00:02.0: firmware version: 16.30.1284 Jan 17 12:09:14.766687 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: VF registering: eth1 Jan 17 12:09:14.766778 kernel: mlx5_core bcb1:00:02.0 eth1: joined to eth0 Jan 17 12:09:14.766900 kernel: mlx5_core bcb1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 12:09:14.775819 kernel: mlx5_core bcb1:00:02.0 enP48305s1: renamed from eth1 Jan 17 12:09:14.887431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 12:09:15.026728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 12:09:15.047814 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (492) Jan 17 12:09:15.061223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:09:15.086820 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (493) Jan 17 12:09:15.099212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 12:09:15.106522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 12:09:15.136024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:09:15.159855 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:15.167815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:16.176818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:09:16.177854 disk-uuid[598]: The operation has completed successfully. Jan 17 12:09:16.234520 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:09:16.234614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:09:16.266933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:09:16.280752 sh[684]: Success Jan 17 12:09:16.324837 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:09:16.530255 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:09:16.551051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:09:16.558839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:09:16.589342 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:09:16.589395 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:16.596285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:09:16.601620 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:09:16.605920 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:09:17.031125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:09:17.036713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:09:17.063038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:09:17.070950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:09:17.107624 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:17.107670 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:17.112399 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:17.133836 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:17.141659 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:09:17.153234 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:17.159684 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:09:17.174938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:09:17.184817 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:17.203972 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:17.232282 systemd-networkd[868]: lo: Link UP Jan 17 12:09:17.232295 systemd-networkd[868]: lo: Gained carrier Jan 17 12:09:17.234296 systemd-networkd[868]: Enumeration completed Jan 17 12:09:17.235116 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:17.235119 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:17.236149 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:17.250612 systemd[1]: Reached target network.target - Network. Jan 17 12:09:17.290822 kernel: mlx5_core bcb1:00:02.0 enP48305s1: Link up Jan 17 12:09:17.328816 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: Data path switched to VF: enP48305s1 Jan 17 12:09:17.329693 systemd-networkd[868]: enP48305s1: Link UP Jan 17 12:09:17.330004 systemd-networkd[868]: eth0: Link UP Jan 17 12:09:17.330377 systemd-networkd[868]: eth0: Gained carrier Jan 17 12:09:17.330386 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:17.355301 systemd-networkd[868]: enP48305s1: Gained carrier Jan 17 12:09:17.369828 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:09:18.224449 ignition[866]: Ignition 2.19.0 Jan 17 12:09:18.224460 ignition[866]: Stage: fetch-offline Jan 17 12:09:18.227174 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:18.224514 ignition[866]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.224523 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.224614 ignition[866]: parsed url from cmdline: "" Jan 17 12:09:18.252081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:09:18.224617 ignition[866]: no config URL provided Jan 17 12:09:18.224622 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:09:18.224628 ignition[866]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:09:18.224633 ignition[866]: failed to fetch config: resource requires networking Jan 17 12:09:18.224821 ignition[866]: Ignition finished successfully Jan 17 12:09:18.287451 ignition[878]: Ignition 2.19.0 Jan 17 12:09:18.287457 ignition[878]: Stage: fetch Jan 17 12:09:18.287637 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.287650 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.287743 ignition[878]: parsed url from cmdline: "" Jan 17 12:09:18.287746 ignition[878]: no config URL provided Jan 17 12:09:18.287751 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:09:18.287757 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:09:18.287777 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 12:09:18.391378 ignition[878]: GET result: OK Jan 17 12:09:18.391491 ignition[878]: config has been read from IMDS userdata Jan 17 12:09:18.391549 ignition[878]: parsing config with SHA512: 3b8d9acb002d678e68255b9772079af4dbb079a565cb9d945ed5fd38080d8453443f5f7d73603f319981b7beb0871d2ff77e6f2a65c610c80da7b5ee0f78cbfc Jan 17 12:09:18.395730 unknown[878]: fetched base config from "system" Jan 17 12:09:18.396192 ignition[878]: fetch: fetch complete Jan 17 12:09:18.395737 unknown[878]: fetched base config from "system" Jan 17 12:09:18.396197 ignition[878]: fetch: fetch passed Jan 17 12:09:18.395742 unknown[878]: fetched user config from "azure" Jan 17 12:09:18.396241 ignition[878]: Ignition finished successfully Jan 17 12:09:18.402479 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:09:18.423964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:09:18.441549 ignition[885]: Ignition 2.19.0 Jan 17 12:09:18.447258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:09:18.441557 ignition[885]: Stage: kargs Jan 17 12:09:18.464936 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:09:18.441730 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.480614 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:09:18.441739 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.487038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:18.442886 ignition[885]: kargs: kargs passed Jan 17 12:09:18.497968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:09:18.442933 ignition[885]: Ignition finished successfully Jan 17 12:09:18.510177 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:18.477530 ignition[891]: Ignition 2.19.0 Jan 17 12:09:18.521580 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:18.477537 ignition[891]: Stage: disks Jan 17 12:09:18.533480 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:18.477732 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:18.561017 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 12:09:18.477741 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:18.479470 ignition[891]: disks: disks passed Jan 17 12:09:18.479513 ignition[891]: Ignition finished successfully Jan 17 12:09:18.660750 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 12:09:18.669398 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:09:18.686005 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:09:18.747850 kernel: EXT4-fs (sda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:09:18.747163 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:09:18.752275 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:18.771012 systemd-networkd[868]: eth0: Gained IPv6LL Jan 17 12:09:18.797938 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:18.808408 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:09:18.817952 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:09:18.824851 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:09:18.876931 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (910) Jan 17 12:09:18.876955 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:18.824886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:18.850299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:09:18.890048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:09:18.911826 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:18.911852 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:18.911862 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:18.918293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:19.282955 systemd-networkd[868]: enP48305s1: Gained IPv6LL Jan 17 12:09:19.455348 coreos-metadata[912]: Jan 17 12:09:19.455 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:09:19.464697 coreos-metadata[912]: Jan 17 12:09:19.464 INFO Fetch successful Jan 17 12:09:19.464697 coreos-metadata[912]: Jan 17 12:09:19.464 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:09:19.482996 coreos-metadata[912]: Jan 17 12:09:19.482 INFO Fetch successful Jan 17 12:09:19.497378 coreos-metadata[912]: Jan 17 12:09:19.497 INFO wrote hostname ci-4081.3.0-a-4140a712f6 to /sysroot/etc/hostname Jan 17 12:09:19.498660 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:09:19.775719 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:09:19.815287 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:09:19.821763 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:09:19.839703 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:09:20.744469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 17 12:09:20.763087 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:09:20.771975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:09:20.794528 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:20.794185 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:09:20.822956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:09:20.833915 ignition[1027]: INFO : Ignition 2.19.0 Jan 17 12:09:20.833915 ignition[1027]: INFO : Stage: mount Jan 17 12:09:20.833915 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:20.833915 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:20.862140 ignition[1027]: INFO : mount: mount passed Jan 17 12:09:20.862140 ignition[1027]: INFO : Ignition finished successfully Jan 17 12:09:20.836106 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:09:20.854966 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:09:20.896001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:20.925773 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Jan 17 12:09:20.925823 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:09:20.936823 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:09:20.936874 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:09:20.945831 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:09:20.946960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:09:20.974625 ignition[1057]: INFO : Ignition 2.19.0 Jan 17 12:09:20.974625 ignition[1057]: INFO : Stage: files Jan 17 12:09:20.983469 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:20.983469 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:20.983469 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:09:20.983469 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:09:20.983469 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:09:21.054170 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:09:21.062610 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:09:21.055283 unknown[1057]: wrote ssh authorized keys file for user: core Jan 17 12:09:21.144972 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:09:21.364691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:09:21.364691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:21.387171 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 12:09:21.807314 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:21.873446 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:21.966951 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 17 12:09:22.274542 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:09:22.444902 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:09:22.444902 ignition[1057]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:09:22.474807 ignition[1057]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.486867 ignition[1057]: INFO : files: files passed Jan 17 12:09:22.486867 ignition[1057]: INFO : Ignition finished successfully Jan 17 12:09:22.487698 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:09:22.540077 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:09:22.559994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:09:22.575101 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:09:22.625989 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.625989 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.575195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:09:22.652001 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.604492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:22.613240 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.649963 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:09:22.684509 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:09:22.684615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:09:22.696857 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:09:22.708288 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:09:22.721041 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:09:22.736966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:09:22.762700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.782041 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:09:22.801216 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:09:22.801333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:09:22.816137 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:22.828779 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:22.842218 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:09:22.853857 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 12:09:22.853942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.870883 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:09:22.877037 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:09:22.888980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.901553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:22.913310 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:22.925734 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:09:22.938035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:22.951222 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:09:22.963036 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:09:22.975454 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:09:22.985743 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:09:22.985826 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:23.001942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:23.014088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:23.027373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:09:23.027419 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:23.041281 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:09:23.041349 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:23.060763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:09:23.060820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:23.076050 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:09:23.076095 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:09:23.087776 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:09:23.087824 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:09:23.156295 ignition[1109]: INFO : Ignition 2.19.0 Jan 17 12:09:23.156295 ignition[1109]: INFO : Stage: umount Jan 17 12:09:23.156295 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:23.156295 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:09:23.156295 ignition[1109]: INFO : umount: umount passed Jan 17 12:09:23.156295 ignition[1109]: INFO : Ignition finished successfully Jan 17 12:09:23.123031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:09:23.139728 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:09:23.139851 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:23.154913 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:09:23.164153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:09:23.164235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:23.175492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 12:09:23.175532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:23.208757 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:09:23.208881 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:09:23.227565 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:09:23.227652 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:09:23.240618 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:09:23.240669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:09:23.254125 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:09:23.254170 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:09:23.265736 systemd[1]: Stopped target network.target - Network. Jan 17 12:09:23.277202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:09:23.277264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:23.291668 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:09:23.302438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:09:23.305820 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:23.315713 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:09:23.326832 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:09:23.345929 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:09:23.345982 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:23.362228 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:09:23.362273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:23.369173 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:09:23.369236 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:09:23.381276 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:09:23.381359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:23.401319 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:09:23.419209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:09:23.424811 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 17 12:09:23.440734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:09:23.441344 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:09:23.442824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:09:23.453102 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:09:23.455037 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:09:23.475011 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:09:23.475073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:23.503975 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:09:23.686450 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: Data path switched from VF: enP48305s1 Jan 17 12:09:23.513507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:09:23.513568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 12:09:23.526711 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:09:23.526769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:23.538976 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:09:23.539022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:23.551754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:09:23.551808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:23.565266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:23.605212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:09:23.605345 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:23.626016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:09:23.626090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:23.637670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:09:23.637702 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:23.649723 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:09:23.649769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:23.680900 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:09:23.680971 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:23.698328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:23.698386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:23.744118 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:09:23.760855 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:09:23.760934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:23.777006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:23.777062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:23.790291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:09:23.790389 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:09:23.805030 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:09:23.805136 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:09:23.854814 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:09:23.854933 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:09:23.863680 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:09:23.875525 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:09:23.875582 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:23.913009 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:09:23.951618 systemd[1]: Switching root. Jan 17 12:09:24.049112 systemd-journald[217]: Journal stopped Jan 17 12:09:28.735141 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jan 17 12:09:28.735164 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:09:28.735175 kernel: SELinux: policy capability open_perms=1 Jan 17 12:09:28.735185 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:09:28.735193 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:09:28.735202 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:09:28.735210 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:09:28.735218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:09:28.735226 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:09:28.735235 systemd[1]: Successfully loaded SELinux policy in 178.569ms. Jan 17 12:09:28.735246 kernel: audit: type=1403 audit(1737115765.405:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:09:28.735254 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.327ms. Jan 17 12:09:28.735264 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:28.735273 systemd[1]: Detected virtualization microsoft. Jan 17 12:09:28.735283 systemd[1]: Detected architecture arm64. Jan 17 12:09:28.735293 systemd[1]: Detected first boot. Jan 17 12:09:28.735302 systemd[1]: Hostname set to . Jan 17 12:09:28.735311 systemd[1]: Initializing machine ID from random generator. Jan 17 12:09:28.735320 zram_generator::config[1150]: No configuration found. Jan 17 12:09:28.735329 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:09:28.735338 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:09:28.735349 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:09:28.735358 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:28.735367 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:09:28.735376 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:09:28.735386 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:09:28.735395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:09:28.735405 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:09:28.735415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:09:28.735424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:09:28.735434 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:09:28.735443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:28.735452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:28.735461 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:09:28.735470 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:09:28.735479 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 12:09:28.735489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:28.735499 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 12:09:28.735509 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:28.735518 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:09:28.735529 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:09:28.735539 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:28.735548 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:09:28.735557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:28.735568 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:09:28.735577 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:28.735587 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:28.735597 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:09:28.735630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:09:28.735639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:28.735649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:28.735660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:28.735670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:09:28.735679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:09:28.735689 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:09:28.735698 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:09:28.735708 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:09:28.735719 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:09:28.735728 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:09:28.735738 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:09:28.735747 systemd[1]: Reached target machines.target - Containers. Jan 17 12:09:28.735757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:09:28.735766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:28.735776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:28.735785 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:09:28.735802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:28.735813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:28.735823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:28.735833 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:09:28.735843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 17 12:09:28.735853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:09:28.735862 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:09:28.735872 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:09:28.735881 kernel: loop: module loaded Jan 17 12:09:28.735892 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:09:28.735902 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:09:28.735911 kernel: fuse: init (API version 7.39) Jan 17 12:09:28.735920 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:28.735929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:09:28.735939 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:09:28.735962 systemd-journald[1253]: Collecting audit messages is disabled. Jan 17 12:09:28.735984 systemd-journald[1253]: Journal started Jan 17 12:09:28.736004 systemd-journald[1253]: Runtime Journal (/run/log/journal/d47eb41c8dcf4e059601fbdc21ca4046) is 8.0M, max 78.5M, 70.5M free. Jan 17 12:09:27.729838 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:09:27.870020 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:09:27.870373 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:09:27.870655 systemd[1]: systemd-journald.service: Consumed 3.400s CPU time. Jan 17 12:09:28.742846 kernel: ACPI: bus type drm_connector registered Jan 17 12:09:28.760857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:09:28.775033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:28.786823 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:09:28.786870 systemd[1]: Stopped verity-setup.service. Jan 17 12:09:28.805192 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:28.806065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:09:28.812642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:09:28.819481 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:09:28.825847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:09:28.832851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:09:28.839862 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:09:28.845530 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:09:28.852609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:28.860098 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:09:28.860231 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:09:28.867088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:28.867223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:28.874056 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:28.874191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:28.880443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 12:09:28.880564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:28.887661 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:09:28.887802 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:09:28.894297 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:28.894428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:28.900876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:28.907459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:09:28.915509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:09:28.922973 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:28.940168 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:09:28.951875 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:09:28.969951 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:09:28.976367 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:09:28.976407 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:28.983371 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:09:28.991489 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:09:28.999353 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:09:29.005350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:29.023962 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:09:29.031070 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:09:29.037664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:29.038972 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:09:29.045987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:29.046916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:29.054951 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:09:29.069240 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:09:29.088384 systemd-journald[1253]: Time spent on flushing to /var/log/journal/d47eb41c8dcf4e059601fbdc21ca4046 is 16.334ms for 896 entries. Jan 17 12:09:29.088384 systemd-journald[1253]: System Journal (/var/log/journal/d47eb41c8dcf4e059601fbdc21ca4046) is 8.0M, max 2.6G, 2.6G free. Jan 17 12:09:29.123485 systemd-journald[1253]: Received client request to flush runtime journal. Jan 17 12:09:29.096173 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:09:29.104410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 17 12:09:29.111233 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:09:29.118524 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:09:29.126140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:09:29.134099 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:09:29.146568 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:09:29.150855 kernel: loop0: detected capacity change from 0 to 114328 Jan 17 12:09:29.164053 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:09:29.172838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:29.181184 udevadm[1288]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:09:29.200868 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:09:29.213943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:29.234184 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:09:29.234833 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:09:29.282078 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jan 17 12:09:29.282093 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jan 17 12:09:29.286476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:29.527825 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:09:29.567825 kernel: loop1: detected capacity change from 0 to 114432 Jan 17 12:09:30.057830 kernel: loop2: detected capacity change from 0 to 189592 Jan 17 12:09:30.103814 kernel: loop3: detected capacity change from 0 to 31320 Jan 17 12:09:30.444103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:09:30.459975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:30.472824 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 12:09:30.484814 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 12:09:30.491920 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jan 17 12:09:30.494808 kernel: loop6: detected capacity change from 0 to 189592 Jan 17 12:09:30.507830 kernel: loop7: detected capacity change from 0 to 31320 Jan 17 12:09:30.510524 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 12:09:30.510941 (sd-merge)[1311]: Merged extensions into '/usr'. Jan 17 12:09:30.514503 systemd[1]: Reloading requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:09:30.514520 systemd[1]: Reloading... Jan 17 12:09:30.577954 zram_generator::config[1341]: No configuration found. Jan 17 12:09:30.720646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:30.792156 systemd[1]: Reloading finished in 277 ms. Jan 17 12:09:30.827930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 17 12:09:30.846017 systemd[1]: Starting ensure-sysext.service... Jan 17 12:09:30.851047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:30.858536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:30.877584 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:30.900692 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:09:30.900707 systemd[1]: Reloading... Jan 17 12:09:30.937201 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:09:30.937498 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:09:30.938230 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:09:30.938477 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jan 17 12:09:30.938530 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jan 17 12:09:30.962420 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:30.962433 systemd-tmpfiles[1400]: Skipping /boot Jan 17 12:09:30.987496 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:30.987511 systemd-tmpfiles[1400]: Skipping /boot Jan 17 12:09:31.048825 zram_generator::config[1449]: No configuration found. Jan 17 12:09:31.048897 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:09:31.094625 kernel: hv_vmbus: registering driver hv_balloon Jan 17 12:09:31.094706 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 12:09:31.100669 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 17 12:09:31.141832 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 12:09:31.154421 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 12:09:31.154497 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 12:09:31.159692 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:09:31.167079 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:09:31.201266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:31.226819 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1405) Jan 17 12:09:31.298294 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 12:09:31.298640 systemd[1]: Reloading finished in 396 ms. Jan 17 12:09:31.313210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:31.344061 systemd[1]: Finished ensure-sysext.service. Jan 17 12:09:31.368104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:09:31.385072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:31.392935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:09:31.399657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 12:09:31.401404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:31.409989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:31.419054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:31.438357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:31.444567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:31.446572 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:09:31.454467 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:09:31.464243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:09:31.473164 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:09:31.491322 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:09:31.502724 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:09:31.513567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:31.521994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:31.522181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:31.533040 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:31.534224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:31.544567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:31.544747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:31.553621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:31.553973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:31.562545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:09:31.580119 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:09:31.590481 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:09:31.602349 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:09:31.618229 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:09:31.634067 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:09:31.640556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:31.640631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:31.646186 augenrules[1584]: No rules Jan 17 12:09:31.647881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:31.709007 systemd-resolved[1569]: Positive Trust Anchors: Jan 17 12:09:31.709025 systemd-resolved[1569]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:31.709058 systemd-resolved[1569]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:31.727227 systemd-resolved[1569]: Using system hostname 'ci-4081.3.0-a-4140a712f6'. Jan 17 12:09:31.728666 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:31.735013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:31.754009 lvm[1599]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:31.777873 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:09:31.787484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:31.800035 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:09:31.801310 systemd-networkd[1409]: lo: Link UP Jan 17 12:09:31.801314 systemd-networkd[1409]: lo: Gained carrier Jan 17 12:09:31.803214 systemd-networkd[1409]: Enumeration completed Jan 17 12:09:31.805921 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:31.805931 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:31.808426 lvm[1607]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:31.809514 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:31.816753 systemd[1]: Reached target network.target - Network. Jan 17 12:09:31.825973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:09:31.839112 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:09:31.882906 kernel: mlx5_core bcb1:00:02.0 enP48305s1: Link up Jan 17 12:09:31.909893 kernel: hv_netvsc 002248b9-7d95-0022-48b9-7d95002248b9 eth0: Data path switched to VF: enP48305s1 Jan 17 12:09:31.910744 systemd-networkd[1409]: enP48305s1: Link UP Jan 17 12:09:31.911013 systemd-networkd[1409]: eth0: Link UP Jan 17 12:09:31.911024 systemd-networkd[1409]: eth0: Gained carrier Jan 17 12:09:31.911039 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:31.915099 systemd-networkd[1409]: enP48305s1: Gained carrier Jan 17 12:09:31.925837 systemd-networkd[1409]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:09:31.952331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:32.163346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 17 12:09:32.171064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:09:33.362942 systemd-networkd[1409]: enP48305s1: Gained IPv6LL Jan 17 12:09:33.874934 systemd-networkd[1409]: eth0: Gained IPv6LL Jan 17 12:09:33.877471 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:09:33.884982 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:09:35.196320 ldconfig[1279]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:09:35.208936 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:09:35.219949 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:09:35.233249 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:09:35.239404 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:35.245175 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:09:35.251738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:09:35.259296 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:09:35.265202 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:09:35.272142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:09:35.279093 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:09:35.279126 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:09:35.284001 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:35.289835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:09:35.297206 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:09:35.310424 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:09:35.316428 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:09:35.322510 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:35.327634 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:35.332673 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:35.332700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:35.335204 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 12:09:35.342940 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:09:35.356879 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:09:35.363961 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:09:35.369954 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:09:35.378949 (chronyd)[1620]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 12:09:35.379951 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 12:09:35.382317 jq[1626]: false Jan 17 12:09:35.385949 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:09:35.385989 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 12:09:35.394577 KVP[1628]: KVP starting; pid is:1628 Jan 17 12:09:35.395576 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 12:09:35.401687 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 12:09:35.402929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:35.406149 chronyd[1632]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 12:09:35.409961 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:09:35.417134 KVP[1628]: KVP LIC Version: 3.1 Jan 17 12:09:35.420943 kernel: hv_utils: KVP IC version 4.0 Jan 17 12:09:35.422113 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:09:35.430080 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:09:35.436325 chronyd[1632]: Timezone right/UTC failed leap second check, ignoring Jan 17 12:09:35.436509 chronyd[1632]: Loaded seccomp filter (level 2) Jan 17 12:09:35.440387 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:09:35.449012 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:09:35.459984 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:09:35.468386 extend-filesystems[1627]: Found loop4 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found loop5 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found loop6 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found loop7 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda1 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda2 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda3 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found usr Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda4 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda6 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda7 Jan 17 12:09:35.476239 extend-filesystems[1627]: Found sda9 Jan 17 12:09:35.476239 extend-filesystems[1627]: Checking size of /dev/sda9 Jan 17 12:09:35.469226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:09:35.675187 coreos-metadata[1622]: Jan 17 12:09:35.674 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:09:35.675389 extend-filesystems[1627]: Old size kept for /dev/sda9 Jan 17 12:09:35.675389 extend-filesystems[1627]: Found sr0 Jan 17 12:09:35.570095 dbus-daemon[1625]: [system] SELinux support is enabled Jan 17 12:09:35.473312 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.680 INFO Fetch successful Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.680 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.687 INFO Fetch successful Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.692 INFO Fetching http://168.63.129.16/machine/59f9591e-3b68-465e-8b6c-2573a115b31c/d13d15d1%2Db45f%2D43eb%2D9bb8%2D2829e0122a6b.%5Fci%2D4081.3.0%2Da%2D4140a712f6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.695 INFO Fetch successful Jan 17 12:09:35.703909 coreos-metadata[1622]: Jan 17 12:09:35.695 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:09:35.696288 dbus-daemon[1625]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:09:35.704101 update_engine[1650]: I20250117 12:09:35.581234 1650 main.cc:92] Flatcar Update Engine starting Jan 17 12:09:35.704101 update_engine[1650]: I20250117 12:09:35.590379 1650 update_check_scheduler.cc:74] Next update check in 2m19s Jan 17 12:09:35.484142 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:09:35.704444 jq[1653]: true Jan 17 12:09:35.495515 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:09:35.509053 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 12:09:35.527312 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:09:35.704918 tar[1669]: linux-arm64/helm Jan 17 12:09:35.527494 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:09:35.705181 jq[1675]: true Jan 17 12:09:35.529153 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:09:35.529296 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:09:35.556270 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:09:35.556450 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:09:35.571780 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:09:35.586289 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:09:35.591597 systemd-logind[1641]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 17 12:09:35.601922 systemd-logind[1641]: New seat seat0. Jan 17 12:09:35.633344 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:09:35.644432 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:09:35.645815 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:09:35.688738 (ntainerd)[1676]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:09:35.694949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:09:35.694983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
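Note on the metadata fetches above: coreos-metadata talks to two distinct endpoints, the Azure wire server at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. Purely as an illustrative sketch (the "Metadata: true" request header is an assumption from standard IMDS usage and is not visible in this log), the vmSize query can be reproduced like this:

import urllib.request

# Same URL the agent logs above; IMDS only answers requests that carry
# the "Metadata: true" header (assumed here, not shown in the log).
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # plain-text VM size; the value is not recorded in this log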
Jan 17 12:09:35.710440 coreos-metadata[1622]: Jan 17 12:09:35.710 INFO Fetch successful Jan 17 12:09:35.732559 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:09:35.732591 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:09:35.748852 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1664) Jan 17 12:09:35.791851 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:09:35.825275 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:09:35.838880 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:09:35.870056 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:09:35.915502 bash[1733]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:09:35.919196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:09:35.931475 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:09:35.962476 sshd_keygen[1651]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:09:35.996459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:09:36.013083 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:09:36.034770 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 12:09:36.048735 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:09:36.049737 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:09:36.077046 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:09:36.106645 locksmithd[1734]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:09:36.108195 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 12:09:36.118283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:09:36.137123 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:09:36.151162 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 12:09:36.160420 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:09:36.286826 tar[1669]: linux-arm64/LICENSE Jan 17 12:09:36.286826 tar[1669]: linux-arm64/README.md Jan 17 12:09:36.298001 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:09:36.604623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:36.606011 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:36.609859 containerd[1676]: time="2025-01-17T12:09:36.609769560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:09:36.639852 containerd[1676]: time="2025-01-17T12:09:36.639752840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641051 containerd[1676]: time="2025-01-17T12:09:36.641014640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641051 containerd[1676]: time="2025-01-17T12:09:36.641049280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:09:36.641118 containerd[1676]: time="2025-01-17T12:09:36.641066640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641230240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641261040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641319200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641332240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641481680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641496160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641509080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641556 containerd[1676]: time="2025-01-17T12:09:36.641518160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641734 containerd[1676]: time="2025-01-17T12:09:36.641582560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641845 containerd[1676]: time="2025-01-17T12:09:36.641788760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641952 containerd[1676]: time="2025-01-17T12:09:36.641927000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:36.641952 containerd[1676]: time="2025-01-17T12:09:36.641948440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:09:36.642572 containerd[1676]: time="2025-01-17T12:09:36.642029600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:09:36.642572 containerd[1676]: time="2025-01-17T12:09:36.642079400Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:09:36.654564 containerd[1676]: time="2025-01-17T12:09:36.654523280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:09:36.654676 containerd[1676]: time="2025-01-17T12:09:36.654582360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:09:36.654676 containerd[1676]: time="2025-01-17T12:09:36.654600520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:09:36.654676 containerd[1676]: time="2025-01-17T12:09:36.654628200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:09:36.654676 containerd[1676]: time="2025-01-17T12:09:36.654642000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:09:36.654837 containerd[1676]: time="2025-01-17T12:09:36.654785040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:09:36.655060 containerd[1676]: time="2025-01-17T12:09:36.655035760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:09:36.655198 containerd[1676]: time="2025-01-17T12:09:36.655172600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:09:36.655227 containerd[1676]: time="2025-01-17T12:09:36.655200840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:09:36.655227 containerd[1676]: time="2025-01-17T12:09:36.655214960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:09:36.655280 containerd[1676]: time="2025-01-17T12:09:36.655228880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.655280 containerd[1676]: time="2025-01-17T12:09:36.655241120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.655280 containerd[1676]: time="2025-01-17T12:09:36.655253640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.655280 containerd[1676]: time="2025-01-17T12:09:36.655267200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655281680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655474880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655498760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655516120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655546320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655641440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655670520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655691800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655712840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655756400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655772240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655791160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655835440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656561 containerd[1676]: time="2025-01-17T12:09:36.655858880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655876800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655895600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655912600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655939120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655973280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.655988800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656004520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656062840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656086760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656103160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656122640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656132960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656152960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:09:36.656938 containerd[1676]: time="2025-01-17T12:09:36.656167040Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:09:36.657210 containerd[1676]: time="2025-01-17T12:09:36.656177840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:09:36.659387 containerd[1676]: time="2025-01-17T12:09:36.658714680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:09:36.659387 containerd[1676]: time="2025-01-17T12:09:36.659277240Z" level=info msg="Connect containerd service" Jan 17 12:09:36.659387 containerd[1676]: time="2025-01-17T12:09:36.659336880Z" level=info msg="using legacy CRI server" Jan 17 12:09:36.659387 containerd[1676]: time="2025-01-17T12:09:36.659344920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:09:36.660150 containerd[1676]: time="2025-01-17T12:09:36.659911600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:09:36.661603 containerd[1676]: time="2025-01-17T12:09:36.661569960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:09:36.661841 containerd[1676]: time="2025-01-17T12:09:36.661765240Z" level=info msg="Start subscribing containerd event" Jan 17 12:09:36.662143 containerd[1676]: time="2025-01-17T12:09:36.662007080Z" level=info msg="Start recovering state" Jan 17 12:09:36.662143 containerd[1676]: time="2025-01-17T12:09:36.661889080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:09:36.662312 containerd[1676]: time="2025-01-17T12:09:36.662284000Z" level=info msg="Start event monitor" Jan 17 12:09:36.662520 containerd[1676]: time="2025-01-17T12:09:36.662447560Z" level=info msg="Start snapshots syncer" Jan 17 12:09:36.662520 containerd[1676]: time="2025-01-17T12:09:36.662464440Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:09:36.662520 containerd[1676]: time="2025-01-17T12:09:36.662472440Z" level=info msg="Start streaming server" Jan 17 12:09:36.663142 containerd[1676]: time="2025-01-17T12:09:36.662388520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:09:36.663900 containerd[1676]: time="2025-01-17T12:09:36.663879200Z" level=info msg="containerd successfully booted in 0.054953s" Jan 17 12:09:36.664076 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:09:36.672462 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:09:36.679987 systemd[1]: Startup finished in 686ms (kernel) + 13.034s (initrd) + 11.451s (userspace) = 25.172s. Jan 17 12:09:37.047223 login[1768]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:37.052290 login[1769]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:37.059187 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:09:37.067066 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:09:37.074013 systemd-logind[1641]: New session 1 of user core. Jan 17 12:09:37.081035 systemd-logind[1641]: New session 2 of user core. 
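Note on the long "Start cri plugin with config" entry above: it is containerd echoing its effective CRI settings, notably the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and registry.k8s.io/pause:3.8 as the sandbox image. The sketch below reconstructs the /etc/containerd/config.toml fragment those values correspond to; it is an illustration, not a file captured from this host, and the real system may set these elsewhere.

from pathlib import Path

# Reconstruction of the CRI values visible in the log entry above
# (illustrative only; written to /tmp so nothing on the host is touched).
fragment = """\
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""
Path("/tmp/containerd-cri-example.toml").write_text(fragment)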
Jan 17 12:09:37.084553 kubelet[1778]: E0117 12:09:37.084287 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:37.086165 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:09:37.086516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:37.086631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:37.095199 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:09:37.098182 (systemd)[1797]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:09:37.233412 systemd[1797]: Queued start job for default target default.target. Jan 17 12:09:37.241778 systemd[1797]: Created slice app.slice - User Application Slice. Jan 17 12:09:37.241836 systemd[1797]: Reached target paths.target - Paths. Jan 17 12:09:37.241848 systemd[1797]: Reached target timers.target - Timers. Jan 17 12:09:37.243071 systemd[1797]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:09:37.252882 systemd[1797]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:09:37.252938 systemd[1797]: Reached target sockets.target - Sockets. Jan 17 12:09:37.252949 systemd[1797]: Reached target basic.target - Basic System. Jan 17 12:09:37.252986 systemd[1797]: Reached target default.target - Main User Target. Jan 17 12:09:37.253010 systemd[1797]: Startup finished in 149ms. Jan 17 12:09:37.253321 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:09:37.263018 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:09:37.263748 systemd[1]: Started session-2.scope - Session 2 of User core. 
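Note on the kubelet exit above (and the identical failures that repeat further down): the cause is a single missing file, /var/lib/kubelet/config.yaml, which is normally written by kubeadm during init/join. Purely as an illustration of the expected format (the field values below are assumptions, not taken from this log, and on this host the real file would be produced by kubeadm rather than by hand), a minimal KubeletConfiguration could be placed like this:

from pathlib import Path

CFG = Path("/var/lib/kubelet/config.yaml")

# kubelet exits with the error logged above whenever this file is absent.
# The content is a minimal illustrative KubeletConfiguration, not the
# configuration this node will eventually run with.
if not CFG.exists():
    CFG.parent.mkdir(parents=True, exist_ok=True)
    CFG.write_text(
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"  # consistent with SystemdCgroup=true in containerd above
    )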
Jan 17 12:09:37.826547 waagent[1765]: 2025-01-17T12:09:37.826451Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 12:09:37.832667 waagent[1765]: 2025-01-17T12:09:37.832603Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 17 12:09:37.837480 waagent[1765]: 2025-01-17T12:09:37.837432Z INFO Daemon Daemon Python: 3.11.9 Jan 17 12:09:37.841965 waagent[1765]: 2025-01-17T12:09:37.841894Z INFO Daemon Daemon Run daemon Jan 17 12:09:37.846416 waagent[1765]: 2025-01-17T12:09:37.846361Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 17 12:09:37.855921 waagent[1765]: 2025-01-17T12:09:37.855870Z INFO Daemon Daemon Using waagent for provisioning Jan 17 12:09:37.861524 waagent[1765]: 2025-01-17T12:09:37.861479Z INFO Daemon Daemon Activate resource disk Jan 17 12:09:37.866610 waagent[1765]: 2025-01-17T12:09:37.866564Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 12:09:37.878525 waagent[1765]: 2025-01-17T12:09:37.878470Z INFO Daemon Daemon Found device: None Jan 17 12:09:37.883331 waagent[1765]: 2025-01-17T12:09:37.883283Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 12:09:37.892361 waagent[1765]: 2025-01-17T12:09:37.892315Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 12:09:37.905909 waagent[1765]: 2025-01-17T12:09:37.905853Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:09:37.911624 waagent[1765]: 2025-01-17T12:09:37.911574Z INFO Daemon Daemon Running default provisioning handler Jan 17 12:09:37.923653 waagent[1765]: 2025-01-17T12:09:37.923593Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 12:09:37.937205 waagent[1765]: 2025-01-17T12:09:37.937150Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 12:09:37.947669 waagent[1765]: 2025-01-17T12:09:37.947614Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 12:09:37.952755 waagent[1765]: 2025-01-17T12:09:37.952708Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 12:09:38.047818 waagent[1765]: 2025-01-17T12:09:38.044542Z INFO Daemon Daemon Successfully mounted dvd Jan 17 12:09:38.074671 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 12:09:38.077878 waagent[1765]: 2025-01-17T12:09:38.076711Z INFO Daemon Daemon Detect protocol endpoint Jan 17 12:09:38.081953 waagent[1765]: 2025-01-17T12:09:38.081893Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:09:38.087558 waagent[1765]: 2025-01-17T12:09:38.087511Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 12:09:38.094068 waagent[1765]: 2025-01-17T12:09:38.094022Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 12:09:38.099631 waagent[1765]: 2025-01-17T12:09:38.099581Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 12:09:38.104880 waagent[1765]: 2025-01-17T12:09:38.104834Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 12:09:38.194222 waagent[1765]: 2025-01-17T12:09:38.194171Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 12:09:38.201032 waagent[1765]: 2025-01-17T12:09:38.200998Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 12:09:38.206708 waagent[1765]: 2025-01-17T12:09:38.206659Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 12:09:38.535909 waagent[1765]: 2025-01-17T12:09:38.531895Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 12:09:38.539075 waagent[1765]: 2025-01-17T12:09:38.539007Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 12:09:38.548514 waagent[1765]: 2025-01-17T12:09:38.548461Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:09:38.603563 waagent[1765]: 2025-01-17T12:09:38.603510Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 17 12:09:38.609463 waagent[1765]: 2025-01-17T12:09:38.609413Z INFO Daemon Jan 17 12:09:38.612350 waagent[1765]: 2025-01-17T12:09:38.612304Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2550584a-71f4-4745-8e51-f708d458fc62 eTag: 17165147878987133837 source: Fabric] Jan 17 12:09:38.623910 waagent[1765]: 2025-01-17T12:09:38.623862Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 12:09:38.630617 waagent[1765]: 2025-01-17T12:09:38.630569Z INFO Daemon Jan 17 12:09:38.633394 waagent[1765]: 2025-01-17T12:09:38.633347Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:09:38.643886 waagent[1765]: 2025-01-17T12:09:38.643846Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 12:09:38.728255 waagent[1765]: 2025-01-17T12:09:38.728185Z INFO Daemon Downloaded certificate {'thumbprint': '91235BC91D52E01554639F592C673B7175EE009F', 'hasPrivateKey': True} Jan 17 12:09:38.738433 waagent[1765]: 2025-01-17T12:09:38.738386Z INFO Daemon Downloaded certificate {'thumbprint': '67569DC958CD391C4D691C831AD905263C57C32B', 'hasPrivateKey': False} Jan 17 12:09:38.748447 waagent[1765]: 2025-01-17T12:09:38.748400Z INFO Daemon Fetch goal state completed Jan 17 12:09:38.761162 waagent[1765]: 2025-01-17T12:09:38.761120Z INFO Daemon Daemon Starting provisioning Jan 17 12:09:38.766199 waagent[1765]: 2025-01-17T12:09:38.766151Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 12:09:38.770745 waagent[1765]: 2025-01-17T12:09:38.770706Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-4140a712f6] Jan 17 12:09:38.778290 waagent[1765]: 2025-01-17T12:09:38.778236Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-4140a712f6] Jan 17 12:09:38.784759 waagent[1765]: 2025-01-17T12:09:38.784713Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 12:09:38.791819 waagent[1765]: 2025-01-17T12:09:38.791741Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 12:09:38.833977 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:38.833986 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 12:09:38.834031 systemd-networkd[1409]: eth0: DHCP lease lost Jan 17 12:09:38.835128 waagent[1765]: 2025-01-17T12:09:38.835072Z INFO Daemon Daemon Create user account if not exists Jan 17 12:09:38.840798 waagent[1765]: 2025-01-17T12:09:38.840747Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 12:09:38.846455 systemd-networkd[1409]: eth0: DHCPv6 lease lost Jan 17 12:09:38.847026 waagent[1765]: 2025-01-17T12:09:38.846971Z INFO Daemon Daemon Configure sudoer Jan 17 12:09:38.851924 waagent[1765]: 2025-01-17T12:09:38.851875Z INFO Daemon Daemon Configure sshd Jan 17 12:09:38.857116 waagent[1765]: 2025-01-17T12:09:38.857059Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 12:09:38.870013 waagent[1765]: 2025-01-17T12:09:38.869962Z INFO Daemon Daemon Deploy ssh public key. Jan 17 12:09:38.878857 systemd-networkd[1409]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:09:39.988805 waagent[1765]: 2025-01-17T12:09:39.984207Z INFO Daemon Daemon Provisioning complete Jan 17 12:09:40.001326 waagent[1765]: 2025-01-17T12:09:40.001278Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 12:09:40.007440 waagent[1765]: 2025-01-17T12:09:40.007397Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 12:09:40.016778 waagent[1765]: 2025-01-17T12:09:40.016738Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 12:09:40.141442 waagent[1853]: 2025-01-17T12:09:40.141375Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 12:09:40.142318 waagent[1853]: 2025-01-17T12:09:40.141843Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 17 12:09:40.142318 waagent[1853]: 2025-01-17T12:09:40.141927Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 12:09:40.207827 waagent[1853]: 2025-01-17T12:09:40.207094Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 12:09:40.207827 waagent[1853]: 2025-01-17T12:09:40.207299Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:09:40.207827 waagent[1853]: 2025-01-17T12:09:40.207357Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:09:40.214999 waagent[1853]: 2025-01-17T12:09:40.214940Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:09:40.230364 waagent[1853]: 2025-01-17T12:09:40.230319Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 17 12:09:40.230822 waagent[1853]: 2025-01-17T12:09:40.230762Z INFO ExtHandler Jan 17 12:09:40.230903 waagent[1853]: 2025-01-17T12:09:40.230869Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5862f9a7-f7f3-4231-811d-bdef65396323 eTag: 17165147878987133837 source: Fabric] Jan 17 12:09:40.231212 waagent[1853]: 2025-01-17T12:09:40.231171Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
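Note on the wire-server exchange logged above (detect endpoint, fetch goal state, download certificates): it is plain HTTP against 168.63.129.16. A rough sketch of the first step follows; the x-ms-version header value is an assumption inferred from the "Wire protocol version:2012-11-30" line, not something this log confirms, and error handling is omitted.

import urllib.request

WIRESERVER = "168.63.129.16"

# First step of the goal-state exchange waagent logs above (sketch only;
# the x-ms-version value is assumed).
req = urllib.request.Request(
    f"http://{WIRESERVER}/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode()[:400])  # XML that names the incarnation and container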
Jan 17 12:09:40.231751 waagent[1853]: 2025-01-17T12:09:40.231704Z INFO ExtHandler Jan 17 12:09:40.231849 waagent[1853]: 2025-01-17T12:09:40.231785Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:09:40.235460 waagent[1853]: 2025-01-17T12:09:40.235426Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 12:09:40.305015 waagent[1853]: 2025-01-17T12:09:40.304902Z INFO ExtHandler Downloaded certificate {'thumbprint': '91235BC91D52E01554639F592C673B7175EE009F', 'hasPrivateKey': True} Jan 17 12:09:40.305349 waagent[1853]: 2025-01-17T12:09:40.305303Z INFO ExtHandler Downloaded certificate {'thumbprint': '67569DC958CD391C4D691C831AD905263C57C32B', 'hasPrivateKey': False} Jan 17 12:09:40.305727 waagent[1853]: 2025-01-17T12:09:40.305687Z INFO ExtHandler Fetch goal state completed Jan 17 12:09:40.320605 waagent[1853]: 2025-01-17T12:09:40.320556Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1853 Jan 17 12:09:40.320744 waagent[1853]: 2025-01-17T12:09:40.320707Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 12:09:40.322309 waagent[1853]: 2025-01-17T12:09:40.322264Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 12:09:40.322679 waagent[1853]: 2025-01-17T12:09:40.322640Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 12:09:40.325882 waagent[1853]: 2025-01-17T12:09:40.325845Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 12:09:40.326052 waagent[1853]: 2025-01-17T12:09:40.326012Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 12:09:40.331638 waagent[1853]: 2025-01-17T12:09:40.331592Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 12:09:40.337611 systemd[1]: Reloading requested from client PID 1868 ('systemctl') (unit waagent.service)... Jan 17 12:09:40.337834 systemd[1]: Reloading... Jan 17 12:09:40.413840 zram_generator::config[1902]: No configuration found. Jan 17 12:09:40.520737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:40.613693 systemd[1]: Reloading finished in 275 ms. Jan 17 12:09:40.635231 waagent[1853]: 2025-01-17T12:09:40.634893Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 12:09:40.641855 systemd[1]: Reloading requested from client PID 1956 ('systemctl') (unit waagent.service)... Jan 17 12:09:40.641869 systemd[1]: Reloading... Jan 17 12:09:40.732849 zram_generator::config[2004]: No configuration found. Jan 17 12:09:40.821642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:40.917061 systemd[1]: Reloading finished in 274 ms. 
Jan 17 12:09:40.941431 waagent[1853]: 2025-01-17T12:09:40.938888Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 12:09:40.941431 waagent[1853]: 2025-01-17T12:09:40.939075Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 12:09:42.082158 waagent[1853]: 2025-01-17T12:09:42.080975Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 17 12:09:42.082158 waagent[1853]: 2025-01-17T12:09:42.081565Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 12:09:42.082490 waagent[1853]: 2025-01-17T12:09:42.082371Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:09:42.082490 waagent[1853]: 2025-01-17T12:09:42.082455Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:09:42.082694 waagent[1853]: 2025-01-17T12:09:42.082645Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 12:09:42.082829 waagent[1853]: 2025-01-17T12:09:42.082754Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 12:09:42.082967 waagent[1853]: 2025-01-17T12:09:42.082914Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 12:09:42.082967 waagent[1853]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 12:09:42.082967 waagent[1853]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 12:09:42.082967 waagent[1853]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 12:09:42.082967 waagent[1853]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:09:42.082967 waagent[1853]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:09:42.082967 waagent[1853]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:09:42.083546 waagent[1853]: 2025-01-17T12:09:42.083490Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 12:09:42.084036 waagent[1853]: 2025-01-17T12:09:42.083978Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 12:09:42.084183 waagent[1853]: 2025-01-17T12:09:42.084142Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:09:42.084266 waagent[1853]: 2025-01-17T12:09:42.084235Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:09:42.084409 waagent[1853]: 2025-01-17T12:09:42.084371Z INFO EnvHandler ExtHandler Configure routes Jan 17 12:09:42.084467 waagent[1853]: 2025-01-17T12:09:42.084440Z INFO EnvHandler ExtHandler Gateway:None Jan 17 12:09:42.084514 waagent[1853]: 2025-01-17T12:09:42.084490Z INFO EnvHandler ExtHandler Routes:None Jan 17 12:09:42.085119 waagent[1853]: 2025-01-17T12:09:42.085073Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 12:09:42.085680 waagent[1853]: 2025-01-17T12:09:42.085476Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 12:09:42.085680 waagent[1853]: 2025-01-17T12:09:42.085548Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
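Note on the MonitorHandler routing-table dump above: it prints /proc/net/route verbatim, so destinations and gateways appear as little-endian hex (0114C80A is 10.200.20.1, the DHCPv4 gateway acquired earlier in this log). A short decoding sketch, assuming a little-endian host as on this arm64 guest:

import socket
import struct

def decode(hexaddr: str) -> str:
    # /proc/net/route stores addresses in host byte order; "<L" assumes the
    # little-endian layout used on this arm64 guest.
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

with open("/proc/net/route") as f:
    next(f)  # skip the header row
    for line in f:
        iface, dest, gateway, *_ = line.split()
        print(f"{iface}: {decode(dest)} via {decode(gateway)}")

# e.g. the default-route row in the dump above decodes to 0.0.0.0 via 10.200.20.1.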
Jan 17 12:09:42.085788 waagent[1853]: 2025-01-17T12:09:42.085736Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 12:09:42.091458 waagent[1853]: 2025-01-17T12:09:42.091405Z INFO ExtHandler ExtHandler Jan 17 12:09:42.091855 waagent[1853]: 2025-01-17T12:09:42.091790Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 17c998f3-1b26-48ff-a4c9-6781586829ba correlation a970983f-7761-4414-ba48-fb0942251319 created: 2025-01-17T12:08:22.035286Z] Jan 17 12:09:42.093145 waagent[1853]: 2025-01-17T12:09:42.093102Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 12:09:42.093813 waagent[1853]: 2025-01-17T12:09:42.093761Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 12:09:42.121973 waagent[1853]: 2025-01-17T12:09:42.121927Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6FC13014-CA35-4EF0-AFEA-08E5E39A9199;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 12:09:42.124841 waagent[1853]: 2025-01-17T12:09:42.124667Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 12:09:42.124841 waagent[1853]: Executing ['ip', '-a', '-o', 'link']: Jan 17 12:09:42.124841 waagent[1853]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 12:09:42.124841 waagent[1853]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:7d:95 brd ff:ff:ff:ff:ff:ff Jan 17 12:09:42.124841 waagent[1853]: 3: enP48305s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:7d:95 brd ff:ff:ff:ff:ff:ff\ altname enP48305p0s2 Jan 17 12:09:42.124841 waagent[1853]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 12:09:42.124841 waagent[1853]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 12:09:42.124841 waagent[1853]: 2: eth0 inet 10.200.20.31/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 12:09:42.124841 waagent[1853]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 12:09:42.124841 waagent[1853]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 12:09:42.124841 waagent[1853]: 2: eth0 inet6 fe80::222:48ff:feb9:7d95/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:09:42.124841 waagent[1853]: 3: enP48305s1 inet6 fe80::222:48ff:feb9:7d95/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:09:42.186610 waagent[1853]: 2025-01-17T12:09:42.186546Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 12:09:42.186610 waagent[1853]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.186610 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.186610 waagent[1853]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.186610 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.186610 waagent[1853]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.186610 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.186610 waagent[1853]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:09:42.186610 waagent[1853]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:09:42.186610 waagent[1853]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:09:42.189411 waagent[1853]: 2025-01-17T12:09:42.189354Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 12:09:42.189411 waagent[1853]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.189411 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.189411 waagent[1853]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.189411 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.189411 waagent[1853]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:09:42.189411 waagent[1853]: pkts bytes target prot opt in out source destination Jan 17 12:09:42.189411 waagent[1853]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:09:42.189411 waagent[1853]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:09:42.189411 waagent[1853]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:09:42.189618 waagent[1853]: 2025-01-17T12:09:42.189594Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 12:09:47.296389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:47.304966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:47.387064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:47.390240 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:47.425161 kubelet[2088]: E0117 12:09:47.425094 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:47.427858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:47.428133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:57.546511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:09:57.556110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:57.636495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
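Note on the firewall tables EnvHandler prints above: the three OUTPUT rules protect the wire server by allowing TCP/53 to 168.63.129.16, allowing root-owned (UID 0) TCP to it, and dropping any other new or invalid TCP connection to it. Roughly equivalent iptables invocations are sketched below; this illustrates the rules shown in the dump, not the agent's actual code path, and waagent manages these rules itself.

import subprocess

WIRESERVER = "168.63.129.16"

# Equivalent of the three OUTPUT rules in the dump above (sketch only).
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w"] + rule, check=True)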
Jan 17 12:09:57.652171 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:57.707050 kubelet[2103]: E0117 12:09:57.706957 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:57.709445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:57.709686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:59.228103 chronyd[1632]: Selected source PHC0 Jan 17 12:10:07.796461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:10:07.802022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:07.886424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:07.889681 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:07.921234 kubelet[2118]: E0117 12:10:07.921197 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:07.923087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:07.923215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:18.046585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:10:18.054966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:18.134546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:18.137789 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:18.170603 kubelet[2133]: E0117 12:10:18.170559 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:18.172294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:18.172416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:19.238389 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 17 12:10:20.550823 update_engine[1650]: I20250117 12:10:20.550452 1650 update_attempter.cc:509] Updating boot flags... Jan 17 12:10:20.620882 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2152) Jan 17 12:10:20.708020 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2145) Jan 17 12:10:28.296443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:10:28.306041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:10:28.561410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:28.564700 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:28.596486 kubelet[2214]: E0117 12:10:28.596438 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:28.598106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:28.598228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:38.796578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 12:10:38.806066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:39.069613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:39.073018 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:39.106010 kubelet[2229]: E0117 12:10:39.105960 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:39.108261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:39.108499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:49.296436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 12:10:49.304959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:49.591708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:49.595482 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:49.626539 kubelet[2244]: E0117 12:10:49.626459 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:49.628311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:49.628438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:59.223735 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:10:59.229005 systemd[1]: Started sshd@0-10.200.20.31:22-10.200.16.10:57604.service - OpenSSH per-connection server daemon (10.200.16.10:57604). Jan 17 12:10:59.790611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 17 12:10:59.800979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
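Note on the "Scheduled restart job, restart counter is at N" lines above: systemd keeps re-launching kubelet roughly every ten seconds after each config-file failure, and the loop will continue until /var/lib/kubelet/config.yaml exists. The behaviour comes from the unit's Restart/RestartSec settings, which can be inspected as sketched below; the example output values are assumptions typical of a kubeadm-style drop-in, not values read from this host.

import subprocess

# Show the unit properties behind the restart loop logged above.
out = subprocess.run(
    ["systemctl", "show", "kubelet.service", "-p", "Restart", "-p", "RestartUSec"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # e.g. "Restart=always" and "RestartUSec=10s" (assumed, not captured here)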
Jan 17 12:10:59.866339 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 57604 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:10:59.867627 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:59.871943 systemd-logind[1641]: New session 3 of user core. Jan 17 12:10:59.880927 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:11:00.077154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:00.084018 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:11:00.118203 kubelet[2263]: E0117 12:11:00.118103 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:11:00.119554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:11:00.119683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:11:00.272042 systemd[1]: Started sshd@1-10.200.20.31:22-10.200.16.10:57612.service - OpenSSH per-connection server daemon (10.200.16.10:57612). Jan 17 12:11:00.695818 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 57612 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:00.697093 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:00.700758 systemd-logind[1641]: New session 4 of user core. Jan 17 12:11:00.708928 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:11:01.028278 sshd[2272]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:01.032386 systemd[1]: sshd@1-10.200.20.31:22-10.200.16.10:57612.service: Deactivated successfully. Jan 17 12:11:01.034003 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:11:01.034665 systemd-logind[1641]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:11:01.035469 systemd-logind[1641]: Removed session 4. Jan 17 12:11:01.102293 systemd[1]: Started sshd@2-10.200.20.31:22-10.200.16.10:57628.service - OpenSSH per-connection server daemon (10.200.16.10:57628). Jan 17 12:11:01.507592 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 57628 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:01.508908 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:01.513641 systemd-logind[1641]: New session 5 of user core. Jan 17 12:11:01.518951 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:11:01.819920 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:01.823292 systemd[1]: sshd@2-10.200.20.31:22-10.200.16.10:57628.service: Deactivated successfully. Jan 17 12:11:01.824944 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:11:01.825527 systemd-logind[1641]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:11:01.826457 systemd-logind[1641]: Removed session 5. Jan 17 12:11:01.900490 systemd[1]: Started sshd@3-10.200.20.31:22-10.200.16.10:57638.service - OpenSSH per-connection server daemon (10.200.16.10:57638). 
Jan 17 12:11:02.323320 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 57638 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:02.324561 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:02.329342 systemd-logind[1641]: New session 6 of user core. Jan 17 12:11:02.335992 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:11:02.655929 sshd[2286]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:02.659339 systemd[1]: sshd@3-10.200.20.31:22-10.200.16.10:57638.service: Deactivated successfully. Jan 17 12:11:02.661321 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:11:02.662139 systemd-logind[1641]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:11:02.663346 systemd-logind[1641]: Removed session 6. Jan 17 12:11:02.732604 systemd[1]: Started sshd@4-10.200.20.31:22-10.200.16.10:57642.service - OpenSSH per-connection server daemon (10.200.16.10:57642). Jan 17 12:11:03.157986 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 57642 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:03.159226 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:03.162986 systemd-logind[1641]: New session 7 of user core. Jan 17 12:11:03.171933 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:11:03.481166 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:11:03.481416 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:11:03.511580 sudo[2296]: pam_unix(sudo:session): session closed for user root Jan 17 12:11:03.600327 sshd[2293]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:03.604517 systemd-logind[1641]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:11:03.605417 systemd[1]: sshd@4-10.200.20.31:22-10.200.16.10:57642.service: Deactivated successfully. Jan 17 12:11:03.607291 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:11:03.608580 systemd-logind[1641]: Removed session 7. Jan 17 12:11:03.674867 systemd[1]: Started sshd@5-10.200.20.31:22-10.200.16.10:57650.service - OpenSSH per-connection server daemon (10.200.16.10:57650). Jan 17 12:11:04.079422 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 57650 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:04.080745 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:04.084399 systemd-logind[1641]: New session 8 of user core. Jan 17 12:11:04.094991 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:11:04.314709 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:11:04.315103 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:11:04.318166 sudo[2305]: pam_unix(sudo:session): session closed for user root Jan 17 12:11:04.322299 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:11:04.322547 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:11:04.338494 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
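The SSH sessions above double as an audit trail: each privilege escalation is logged by sudo with the caller, working directory, target user, and full command (setenforce here, the audit-rules cleanup and install.sh below). A small sketch, under the same journal.txt assumption, that lifts those records out of the text.

import re

# Matches sudo records such as:
#   sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]: (?P<caller>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<target>\S+) ; COMMAND=(?P<cmd>.+?)(?=\s\w{3} \d{2} \d{2}:\d{2}:\d{2}|\n|$)"
)

def sudo_commands(journal_text):
    """Return (pid, caller, target_user, command) tuples for each sudo record."""
    return [
        (m.group("pid"), m.group("caller"), m.group("target"), m.group("cmd").strip())
        for m in SUDO_RE.finditer(journal_text)
    ]

if __name__ == "__main__":
    with open("journal.txt") as fh:          # hypothetical dump of the log above
        for pid, caller, target, cmd in sudo_commands(fh.read()):
            print(f"sudo[{pid}] {caller} -> {target}: {cmd}")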
Jan 17 12:11:04.339463 auditctl[2308]: No rules Jan 17 12:11:04.339742 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:11:04.341084 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:11:04.343224 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:11:04.364869 augenrules[2326]: No rules Jan 17 12:11:04.366257 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:11:04.367230 sudo[2304]: pam_unix(sudo:session): session closed for user root Jan 17 12:11:04.448028 sshd[2301]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:04.451339 systemd[1]: sshd@5-10.200.20.31:22-10.200.16.10:57650.service: Deactivated successfully. Jan 17 12:11:04.453213 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:11:04.454024 systemd-logind[1641]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:11:04.454770 systemd-logind[1641]: Removed session 8. Jan 17 12:11:04.522059 systemd[1]: Started sshd@6-10.200.20.31:22-10.200.16.10:57662.service - OpenSSH per-connection server daemon (10.200.16.10:57662). Jan 17 12:11:04.926594 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 57662 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:04.927836 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:04.931556 systemd-logind[1641]: New session 9 of user core. Jan 17 12:11:04.942167 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:11:05.162434 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:11:05.162694 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:11:06.441007 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:11:06.442416 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:11:07.140898 dockerd[2352]: time="2025-01-17T12:11:07.140583128Z" level=info msg="Starting up" Jan 17 12:11:07.646011 dockerd[2352]: time="2025-01-17T12:11:07.645964112Z" level=info msg="Loading containers: start." Jan 17 12:11:07.791950 kernel: Initializing XFRM netlink socket Jan 17 12:11:07.948180 systemd-networkd[1409]: docker0: Link UP Jan 17 12:11:07.972910 dockerd[2352]: time="2025-01-17T12:11:07.972872261Z" level=info msg="Loading containers: done." Jan 17 12:11:07.991855 dockerd[2352]: time="2025-01-17T12:11:07.991786752Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:11:07.991991 dockerd[2352]: time="2025-01-17T12:11:07.991911272Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:11:07.992055 dockerd[2352]: time="2025-01-17T12:11:07.992028632Z" level=info msg="Daemon has completed initialization" Jan 17 12:11:08.044332 dockerd[2352]: time="2025-01-17T12:11:08.043937330Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:11:08.044589 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:11:08.585159 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck677750933-merged.mount: Deactivated successfully. 
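dockerd stamps its own messages with RFC 3339 timestamps, so the daemon's startup time can be read directly off the two entries above: "Starting up" at 12:11:07.140 and "Daemon has completed initialization" at 12:11:07.992, roughly 0.85 s apart. A quick check of that arithmetic:

from datetime import datetime

def parse_dockerd_time(stamp: str) -> datetime:
    """Parse dockerd's nanosecond RFC 3339 timestamps, truncating to microseconds."""
    base, frac = stamp.rstrip("Z").split(".")
    return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

# Values copied from the docker.service messages above.
started = parse_dockerd_time("2025-01-17T12:11:07.140583128Z")
completed = parse_dockerd_time("2025-01-17T12:11:07.992028632Z")

print(f"dockerd initialised in {(completed - started).total_seconds():.3f}s")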
Jan 17 12:11:08.809308 containerd[1676]: time="2025-01-17T12:11:08.809254806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 17 12:11:09.873924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26889524.mount: Deactivated successfully. Jan 17 12:11:10.296321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 17 12:11:10.302997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:10.406603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:10.411175 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:11:10.445197 kubelet[2520]: E0117 12:11:10.445129 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:11:10.447253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:11:10.447508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:11:11.483115 containerd[1676]: time="2025-01-17T12:11:11.483056337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:11.486054 containerd[1676]: time="2025-01-17T12:11:11.486025941Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070" Jan 17 12:11:11.490134 containerd[1676]: time="2025-01-17T12:11:11.490088787Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:11.494815 containerd[1676]: time="2025-01-17T12:11:11.494759954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:11.496180 containerd[1676]: time="2025-01-17T12:11:11.495883275Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.686590749s" Jan 17 12:11:11.496180 containerd[1676]: time="2025-01-17T12:11:11.495917235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 17 12:11:11.496717 containerd[1676]: time="2025-01-17T12:11:11.496537196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 17 12:11:13.308139 containerd[1676]: time="2025-01-17T12:11:13.308098300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:13.311143 containerd[1676]: time="2025-01-17T12:11:13.311108305Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes 
read=22469467" Jan 17 12:11:13.314507 containerd[1676]: time="2025-01-17T12:11:13.314456390Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:13.320502 containerd[1676]: time="2025-01-17T12:11:13.320454279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:13.321736 containerd[1676]: time="2025-01-17T12:11:13.321625560Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.825061124s" Jan 17 12:11:13.321736 containerd[1676]: time="2025-01-17T12:11:13.321658440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 17 12:11:13.322289 containerd[1676]: time="2025-01-17T12:11:13.322253761Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 17 12:11:14.726048 containerd[1676]: time="2025-01-17T12:11:14.725993785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.728406 containerd[1676]: time="2025-01-17T12:11:14.728256989Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217" Jan 17 12:11:14.732937 containerd[1676]: time="2025-01-17T12:11:14.732910356Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.738078 containerd[1676]: time="2025-01-17T12:11:14.738025923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:14.739456 containerd[1676]: time="2025-01-17T12:11:14.739066525Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.416706564s" Jan 17 12:11:14.739456 containerd[1676]: time="2025-01-17T12:11:14.739102125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 17 12:11:14.739697 containerd[1676]: time="2025-01-17T12:11:14.739665326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 12:11:16.066150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942240030.mount: Deactivated successfully. 
Jan 17 12:11:16.479820 containerd[1676]: time="2025-01-17T12:11:16.478934969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:16.483197 containerd[1676]: time="2025-01-17T12:11:16.483168701Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117" Jan 17 12:11:16.485992 containerd[1676]: time="2025-01-17T12:11:16.485948349Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:16.490997 containerd[1676]: time="2025-01-17T12:11:16.490948682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:16.491909 containerd[1676]: time="2025-01-17T12:11:16.491730285Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.752031439s" Jan 17 12:11:16.491909 containerd[1676]: time="2025-01-17T12:11:16.491758885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 17 12:11:16.492498 containerd[1676]: time="2025-01-17T12:11:16.492165206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:11:17.165393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860120530.mount: Deactivated successfully. 
Jan 17 12:11:18.135621 containerd[1676]: time="2025-01-17T12:11:18.135566729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.138283 containerd[1676]: time="2025-01-17T12:11:18.138251176Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:11:18.151380 containerd[1676]: time="2025-01-17T12:11:18.149966528Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.155051 containerd[1676]: time="2025-01-17T12:11:18.155022862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.157073 containerd[1676]: time="2025-01-17T12:11:18.157034068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.664837102s" Jan 17 12:11:18.157073 containerd[1676]: time="2025-01-17T12:11:18.157072748Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:11:18.157751 containerd[1676]: time="2025-01-17T12:11:18.157723670Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 12:11:18.819939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132204386.mount: Deactivated successfully. 
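Each "Pulled image ... size ... in ..." message pairs a byte count with a wall-clock duration, so the effective pull rate is implicit in the log: kube-apiserver, for instance, is 25,614,870 bytes in 2.686590749 s, a little over 9 MB/s. A sketch that extracts those pairs from the same hypothetical journal.txt:

import re

# Matches containerd messages such as:
#   Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:...\",
#   repo tag ..., repo digest ..., size \"25614870\" in 2.686590749s
PULL_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?" with image id .*?'
    r' size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)',
    re.DOTALL,
)

def pull_rates(journal_text):
    """Yield (image, bytes, seconds, MB/s) for every completed image pull."""
    for m in PULL_RE.finditer(journal_text):
        seconds = float(m.group("dur")) * (0.001 if m.group("unit") == "ms" else 1.0)
        size = int(m.group("size"))
        yield m.group("image"), size, seconds, size / seconds / 1e6

if __name__ == "__main__":
    with open("journal.txt") as fh:          # hypothetical dump of the log above
        for image, size, seconds, rate in pull_rates(fh.read()):
            print(f"{image}: {size} bytes in {seconds:.3f}s ({rate:.1f} MB/s)")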
Jan 17 12:11:18.849500 containerd[1676]: time="2025-01-17T12:11:18.849444614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.852346 containerd[1676]: time="2025-01-17T12:11:18.852315181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 12:11:18.856272 containerd[1676]: time="2025-01-17T12:11:18.856243832Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.861039 containerd[1676]: time="2025-01-17T12:11:18.860988525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:18.861632 containerd[1676]: time="2025-01-17T12:11:18.861598607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 703.844377ms" Jan 17 12:11:18.861632 containerd[1676]: time="2025-01-17T12:11:18.861629527Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 12:11:18.862296 containerd[1676]: time="2025-01-17T12:11:18.862059368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 17 12:11:20.166971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296634986.mount: Deactivated successfully. Jan 17 12:11:20.546343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 17 12:11:20.554061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:20.684948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:20.688345 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:11:20.724877 kubelet[2655]: E0117 12:11:20.724833 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:11:20.729892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:11:20.730166 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:11:23.236386 containerd[1676]: time="2025-01-17T12:11:23.236333449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:23.341469 containerd[1676]: time="2025-01-17T12:11:23.341413682Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 17 12:11:23.345234 containerd[1676]: time="2025-01-17T12:11:23.345174451Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:23.350901 containerd[1676]: time="2025-01-17T12:11:23.350844306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:23.352869 containerd[1676]: time="2025-01-17T12:11:23.351999789Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.489911381s" Jan 17 12:11:23.352869 containerd[1676]: time="2025-01-17T12:11:23.352031669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 17 12:11:28.663885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:28.674019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:28.698783 systemd[1]: Reloading requested from client PID 2713 ('systemctl') (unit session-9.scope)... Jan 17 12:11:28.698815 systemd[1]: Reloading... Jan 17 12:11:28.788826 zram_generator::config[2753]: No configuration found. Jan 17 12:11:28.904086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:11:28.996138 systemd[1]: Reloading finished in 296 ms. Jan 17 12:11:29.071533 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:11:29.071630 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:11:29.071921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:29.079097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:30.291962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:30.296415 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:11:30.327550 kubelet[2817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:11:30.327550 kubelet[2817]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:11:30.327550 kubelet[2817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:11:30.327868 kubelet[2817]: I0117 12:11:30.327618 2817 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:11:31.983819 kubelet[2817]: I0117 12:11:31.983453 2817 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:11:31.983819 kubelet[2817]: I0117 12:11:31.983483 2817 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:11:31.983819 kubelet[2817]: I0117 12:11:31.983701 2817 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:11:32.003468 kubelet[2817]: E0117 12:11:32.003415 2817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:32.004299 kubelet[2817]: I0117 12:11:32.004270 2817 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:11:32.009892 kubelet[2817]: E0117 12:11:32.009839 2817 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:11:32.009892 kubelet[2817]: I0117 12:11:32.009876 2817 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:11:32.013496 kubelet[2817]: I0117 12:11:32.013470 2817 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:11:32.013574 kubelet[2817]: I0117 12:11:32.013568 2817 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:11:32.013695 kubelet[2817]: I0117 12:11:32.013668 2817 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:11:32.013872 kubelet[2817]: I0117 12:11:32.013694 2817 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-4140a712f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:11:32.013950 kubelet[2817]: I0117 12:11:32.013884 2817 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:11:32.013950 kubelet[2817]: I0117 12:11:32.013894 2817 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:11:32.014015 kubelet[2817]: I0117 12:11:32.013999 2817 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:11:32.015165 kubelet[2817]: I0117 12:11:32.015146 2817 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:11:32.015203 kubelet[2817]: I0117 12:11:32.015169 2817 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:11:32.015203 kubelet[2817]: I0117 12:11:32.015193 2817 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:11:32.015203 kubelet[2817]: I0117 12:11:32.015201 2817 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:11:32.019360 kubelet[2817]: W0117 12:11:32.019027 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:32.019360 kubelet[2817]: E0117 12:11:32.019074 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:32.019360 kubelet[2817]: W0117 12:11:32.019302 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4140a712f6&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:32.019360 kubelet[2817]: E0117 12:11:32.019333 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4140a712f6&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:32.019906 kubelet[2817]: I0117 12:11:32.019789 2817 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:11:32.021690 kubelet[2817]: I0117 12:11:32.021674 2817 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:11:32.022372 kubelet[2817]: W0117 12:11:32.022196 2817 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:11:32.023464 kubelet[2817]: I0117 12:11:32.023449 2817 server.go:1269] "Started kubelet" Jan 17 12:11:32.025983 kubelet[2817]: I0117 12:11:32.025643 2817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:11:32.027851 kubelet[2817]: I0117 12:11:32.026302 2817 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:11:32.027851 kubelet[2817]: I0117 12:11:32.026353 2817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:11:32.027851 kubelet[2817]: I0117 12:11:32.026360 2817 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:11:32.027851 kubelet[2817]: I0117 12:11:32.027141 2817 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:11:32.028487 kubelet[2817]: I0117 12:11:32.028443 2817 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:11:32.030380 kubelet[2817]: I0117 12:11:32.030356 2817 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:11:32.030589 kubelet[2817]: E0117 12:11:32.030561 2817 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4140a712f6\" not found" Jan 17 12:11:32.030653 kubelet[2817]: E0117 12:11:32.029499 2817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-4140a712f6.181b79b33dbde1af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4140a712f6,UID:ci-4081.3.0-a-4140a712f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4140a712f6,},FirstTimestamp:2025-01-17 12:11:32.023419311 +0000 UTC 
m=+1.724475444,LastTimestamp:2025-01-17 12:11:32.023419311 +0000 UTC m=+1.724475444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4140a712f6,}" Jan 17 12:11:32.031461 kubelet[2817]: I0117 12:11:32.031442 2817 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:11:32.031526 kubelet[2817]: I0117 12:11:32.031495 2817 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:11:32.032114 kubelet[2817]: W0117 12:11:32.032059 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:32.032190 kubelet[2817]: E0117 12:11:32.032122 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:32.032770 kubelet[2817]: E0117 12:11:32.032717 2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4140a712f6?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="200ms" Jan 17 12:11:32.033150 kubelet[2817]: I0117 12:11:32.033108 2817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:11:32.037159 kubelet[2817]: I0117 12:11:32.037139 2817 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:11:32.037249 kubelet[2817]: I0117 12:11:32.037240 2817 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:11:32.038870 kubelet[2817]: E0117 12:11:32.037155 2817 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:11:32.046537 kubelet[2817]: I0117 12:11:32.046484 2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:11:32.047385 kubelet[2817]: I0117 12:11:32.047357 2817 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:11:32.047385 kubelet[2817]: I0117 12:11:32.047382 2817 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:11:32.047471 kubelet[2817]: I0117 12:11:32.047403 2817 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:11:32.047471 kubelet[2817]: E0117 12:11:32.047442 2817 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:11:32.053542 kubelet[2817]: W0117 12:11:32.053501 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:32.053619 kubelet[2817]: E0117 12:11:32.053553 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:32.093698 kubelet[2817]: I0117 12:11:32.093671 2817 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:11:32.093841 kubelet[2817]: I0117 12:11:32.093690 2817 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:11:32.093841 kubelet[2817]: I0117 12:11:32.093756 2817 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:11:32.099203 kubelet[2817]: I0117 12:11:32.099180 2817 policy_none.go:49] "None policy: Start" Jan 17 12:11:32.099897 kubelet[2817]: I0117 12:11:32.099877 2817 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:11:32.099959 kubelet[2817]: I0117 12:11:32.099905 2817 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:11:32.107968 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:11:32.118270 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:11:32.121426 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:11:32.131250 kubelet[2817]: E0117 12:11:32.131224 2817 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4140a712f6\" not found" Jan 17 12:11:32.132447 kubelet[2817]: I0117 12:11:32.132422 2817 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:11:32.132639 kubelet[2817]: I0117 12:11:32.132584 2817 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:11:32.132639 kubelet[2817]: I0117 12:11:32.132596 2817 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:11:32.132996 kubelet[2817]: I0117 12:11:32.132974 2817 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:11:32.134559 kubelet[2817]: E0117 12:11:32.134532 2817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-4140a712f6\" not found" Jan 17 12:11:32.157094 systemd[1]: Created slice kubepods-burstable-podfb2b777e0c41d21f357e9b0ff3850bc0.slice - libcontainer container kubepods-burstable-podfb2b777e0c41d21f357e9b0ff3850bc0.slice. 
Jan 17 12:11:32.158291 kubelet[2817]: E0117 12:11:32.157946 2817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-4140a712f6.181b79b33dbde1af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4140a712f6,UID:ci-4081.3.0-a-4140a712f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4140a712f6,},FirstTimestamp:2025-01-17 12:11:32.023419311 +0000 UTC m=+1.724475444,LastTimestamp:2025-01-17 12:11:32.023419311 +0000 UTC m=+1.724475444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4140a712f6,}" Jan 17 12:11:32.171276 systemd[1]: Created slice kubepods-burstable-podd365211104e947f7a615f326eac17c6d.slice - libcontainer container kubepods-burstable-podd365211104e947f7a615f326eac17c6d.slice. Jan 17 12:11:32.184972 systemd[1]: Created slice kubepods-burstable-pod52bbf48484c22f56ae70934d01c007a4.slice - libcontainer container kubepods-burstable-pod52bbf48484c22f56ae70934d01c007a4.slice. Jan 17 12:11:32.234041 kubelet[2817]: E0117 12:11:32.233588 2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4140a712f6?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="400ms" Jan 17 12:11:32.234041 kubelet[2817]: I0117 12:11:32.233686 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234041 kubelet[2817]: I0117 12:11:32.233721 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52bbf48484c22f56ae70934d01c007a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-4140a712f6\" (UID: \"52bbf48484c22f56ae70934d01c007a4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234041 kubelet[2817]: I0117 12:11:32.233740 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234041 kubelet[2817]: I0117 12:11:32.233757 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234242 kubelet[2817]: I0117 12:11:32.233774 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234242 kubelet[2817]: I0117 12:11:32.233858 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234242 kubelet[2817]: I0117 12:11:32.233876 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234242 kubelet[2817]: I0117 12:11:32.233891 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.234242 kubelet[2817]: I0117 12:11:32.233906 2817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.235071 kubelet[2817]: I0117 12:11:32.234495 2817 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.235071 kubelet[2817]: E0117 12:11:32.234780 2817 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.437316 kubelet[2817]: I0117 12:11:32.437286 2817 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.437675 kubelet[2817]: E0117 12:11:32.437646 2817 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.469574 containerd[1676]: time="2025-01-17T12:11:32.469535248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-4140a712f6,Uid:fb2b777e0c41d21f357e9b0ff3850bc0,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:32.483159 containerd[1676]: time="2025-01-17T12:11:32.483114322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-4140a712f6,Uid:d365211104e947f7a615f326eac17c6d,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:32.487872 containerd[1676]: time="2025-01-17T12:11:32.487736013Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-4140a712f6,Uid:52bbf48484c22f56ae70934d01c007a4,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:32.634736 kubelet[2817]: E0117 12:11:32.634689 2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4140a712f6?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="800ms" Jan 17 12:11:32.839159 kubelet[2817]: I0117 12:11:32.839100 2817 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:32.839416 kubelet[2817]: E0117 12:11:32.839380 2817 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:33.041181 kubelet[2817]: W0117 12:11:33.041076 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4140a712f6&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:33.041181 kubelet[2817]: E0117 12:11:33.041141 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4140a712f6&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:33.142244 kubelet[2817]: W0117 12:11:33.142127 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:33.142244 kubelet[2817]: E0117 12:11:33.142174 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:33.197173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363372108.mount: Deactivated successfully. 
Jan 17 12:11:33.229390 containerd[1676]: time="2025-01-17T12:11:33.229336237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:11:33.235484 containerd[1676]: time="2025-01-17T12:11:33.235407692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:11:33.241835 containerd[1676]: time="2025-01-17T12:11:33.241554308Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:11:33.245453 containerd[1676]: time="2025-01-17T12:11:33.244723035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:11:33.247566 containerd[1676]: time="2025-01-17T12:11:33.247525322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:11:33.250984 containerd[1676]: time="2025-01-17T12:11:33.250946051Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:11:33.252525 containerd[1676]: time="2025-01-17T12:11:33.252310614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:11:33.257229 containerd[1676]: time="2025-01-17T12:11:33.257195706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:11:33.257996 containerd[1676]: time="2025-01-17T12:11:33.257957628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 774.774186ms" Jan 17 12:11:33.259066 containerd[1676]: time="2025-01-17T12:11:33.259035711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 789.425503ms" Jan 17 12:11:33.266323 containerd[1676]: time="2025-01-17T12:11:33.266280408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 778.492835ms" Jan 17 12:11:33.297303 kubelet[2817]: W0117 12:11:33.297241 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 
12:11:33.297431 kubelet[2817]: E0117 12:11:33.297312 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:33.435588 kubelet[2817]: E0117 12:11:33.435470 2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4140a712f6?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="1.6s" Jan 17 12:11:33.619082 kubelet[2817]: W0117 12:11:33.619001 2817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Jan 17 12:11:33.619082 kubelet[2817]: E0117 12:11:33.619043 2817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:33.642212 kubelet[2817]: I0117 12:11:33.641916 2817 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:33.642212 kubelet[2817]: E0117 12:11:33.642187 2817 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:33.998098 containerd[1676]: time="2025-01-17T12:11:33.997898048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:33.998098 containerd[1676]: time="2025-01-17T12:11:33.997957289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:33.998098 containerd[1676]: time="2025-01-17T12:11:33.997972769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:33.998098 containerd[1676]: time="2025-01-17T12:11:33.998054969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:34.000177 containerd[1676]: time="2025-01-17T12:11:33.999947773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:34.000177 containerd[1676]: time="2025-01-17T12:11:34.000001734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:34.000177 containerd[1676]: time="2025-01-17T12:11:34.000017654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:34.000177 containerd[1676]: time="2025-01-17T12:11:34.000103534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:34.001712 containerd[1676]: time="2025-01-17T12:11:34.001571857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:34.001712 containerd[1676]: time="2025-01-17T12:11:34.001623458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:34.001712 containerd[1676]: time="2025-01-17T12:11:34.001638898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:34.001960 containerd[1676]: time="2025-01-17T12:11:34.001906498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:34.020987 systemd[1]: Started cri-containerd-8c9aa8a14a70d5bfa98eb44be8d5e3e6834ca815dc54ce0bfa11ef5c23196f71.scope - libcontainer container 8c9aa8a14a70d5bfa98eb44be8d5e3e6834ca815dc54ce0bfa11ef5c23196f71. Jan 17 12:11:34.025436 systemd[1]: Started cri-containerd-fc2795fbdd760ce07833ea840efc63b251a1aa36179cb41fb22a377bb811b78a.scope - libcontainer container fc2795fbdd760ce07833ea840efc63b251a1aa36179cb41fb22a377bb811b78a. Jan 17 12:11:34.030541 systemd[1]: Started cri-containerd-8c1cf3b48cefd96b739f6380a895c3123ee78cd895e9a00237bca2bb46e32e24.scope - libcontainer container 8c1cf3b48cefd96b739f6380a895c3123ee78cd895e9a00237bca2bb46e32e24. Jan 17 12:11:34.075052 containerd[1676]: time="2025-01-17T12:11:34.074732717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-4140a712f6,Uid:d365211104e947f7a615f326eac17c6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9aa8a14a70d5bfa98eb44be8d5e3e6834ca815dc54ce0bfa11ef5c23196f71\"" Jan 17 12:11:34.078308 containerd[1676]: time="2025-01-17T12:11:34.077637725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-4140a712f6,Uid:52bbf48484c22f56ae70934d01c007a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc2795fbdd760ce07833ea840efc63b251a1aa36179cb41fb22a377bb811b78a\"" Jan 17 12:11:34.080372 containerd[1676]: time="2025-01-17T12:11:34.080347131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-4140a712f6,Uid:fb2b777e0c41d21f357e9b0ff3850bc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1cf3b48cefd96b739f6380a895c3123ee78cd895e9a00237bca2bb46e32e24\"" Jan 17 12:11:34.081488 containerd[1676]: time="2025-01-17T12:11:34.081460654Z" level=info msg="CreateContainer within sandbox \"8c9aa8a14a70d5bfa98eb44be8d5e3e6834ca815dc54ce0bfa11ef5c23196f71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:11:34.082730 containerd[1676]: time="2025-01-17T12:11:34.082682697Z" level=info msg="CreateContainer within sandbox \"fc2795fbdd760ce07833ea840efc63b251a1aa36179cb41fb22a377bb811b78a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:11:34.085431 containerd[1676]: time="2025-01-17T12:11:34.085399104Z" level=info msg="CreateContainer within sandbox \"8c1cf3b48cefd96b739f6380a895c3123ee78cd895e9a00237bca2bb46e32e24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:11:34.114510 kubelet[2817]: E0117 12:11:34.114471 2817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a 
signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:11:34.169280 containerd[1676]: time="2025-01-17T12:11:34.169062469Z" level=info msg="CreateContainer within sandbox \"fc2795fbdd760ce07833ea840efc63b251a1aa36179cb41fb22a377bb811b78a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d40eeea67b4dd3dbffe40efcb753f97ba1e8934d11b48c3674aad52e6f440ffc\"" Jan 17 12:11:34.173860 containerd[1676]: time="2025-01-17T12:11:34.173783361Z" level=info msg="CreateContainer within sandbox \"8c9aa8a14a70d5bfa98eb44be8d5e3e6834ca815dc54ce0bfa11ef5c23196f71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b\"" Jan 17 12:11:34.174054 containerd[1676]: time="2025-01-17T12:11:34.174029602Z" level=info msg="StartContainer for \"d40eeea67b4dd3dbffe40efcb753f97ba1e8934d11b48c3674aad52e6f440ffc\"" Jan 17 12:11:34.178335 containerd[1676]: time="2025-01-17T12:11:34.178297732Z" level=info msg="CreateContainer within sandbox \"8c1cf3b48cefd96b739f6380a895c3123ee78cd895e9a00237bca2bb46e32e24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f90645f02757b156d6d85d558ba5e36ae6796511365f638c87ed0e3cca8b1414\"" Jan 17 12:11:34.180501 containerd[1676]: time="2025-01-17T12:11:34.178719053Z" level=info msg="StartContainer for \"ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b\"" Jan 17 12:11:34.183091 containerd[1676]: time="2025-01-17T12:11:34.183067624Z" level=info msg="StartContainer for \"f90645f02757b156d6d85d558ba5e36ae6796511365f638c87ed0e3cca8b1414\"" Jan 17 12:11:34.215106 systemd[1]: Started cri-containerd-d40eeea67b4dd3dbffe40efcb753f97ba1e8934d11b48c3674aad52e6f440ffc.scope - libcontainer container d40eeea67b4dd3dbffe40efcb753f97ba1e8934d11b48c3674aad52e6f440ffc. Jan 17 12:11:34.230287 systemd[1]: run-containerd-runc-k8s.io-ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b-runc.ZKgpA9.mount: Deactivated successfully. Jan 17 12:11:34.242041 systemd[1]: Started cri-containerd-ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b.scope - libcontainer container ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b. Jan 17 12:11:34.249968 systemd[1]: Started cri-containerd-f90645f02757b156d6d85d558ba5e36ae6796511365f638c87ed0e3cca8b1414.scope - libcontainer container f90645f02757b156d6d85d558ba5e36ae6796511365f638c87ed0e3cca8b1414. 
Jan 17 12:11:34.294666 containerd[1676]: time="2025-01-17T12:11:34.294604218Z" level=info msg="StartContainer for \"d40eeea67b4dd3dbffe40efcb753f97ba1e8934d11b48c3674aad52e6f440ffc\" returns successfully" Jan 17 12:11:34.311847 containerd[1676]: time="2025-01-17T12:11:34.311808741Z" level=info msg="StartContainer for \"f90645f02757b156d6d85d558ba5e36ae6796511365f638c87ed0e3cca8b1414\" returns successfully" Jan 17 12:11:34.312186 containerd[1676]: time="2025-01-17T12:11:34.311808781Z" level=info msg="StartContainer for \"ea1165f664083b76f40a9302282a221dcfa439eb853555852e17b3440348885b\" returns successfully" Jan 17 12:11:35.245248 kubelet[2817]: I0117 12:11:35.245212 2817 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:36.333569 kubelet[2817]: E0117 12:11:36.333531 2817 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-4140a712f6\" not found" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:36.446538 kubelet[2817]: I0117 12:11:36.446498 2817 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:37.020353 kubelet[2817]: I0117 12:11:37.020324 2817 apiserver.go:52] "Watching apiserver" Jan 17 12:11:37.032022 kubelet[2817]: I0117 12:11:37.031979 2817 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:11:37.091683 kubelet[2817]: E0117 12:11:37.091463 2817 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:38.089196 kubelet[2817]: W0117 12:11:38.088904 2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:11:38.734260 systemd[1]: Reloading requested from client PID 3085 ('systemctl') (unit session-9.scope)... Jan 17 12:11:38.734277 systemd[1]: Reloading... Jan 17 12:11:38.827120 zram_generator::config[3128]: No configuration found. Jan 17 12:11:38.926517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:11:39.032743 systemd[1]: Reloading finished in 298 ms. Jan 17 12:11:39.073689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:39.085180 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:11:39.085364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:39.085405 systemd[1]: kubelet.service: Consumed 2.037s CPU time, 115.7M memory peak, 0B memory swap peak. Jan 17 12:11:39.091091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:11:39.274910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:11:39.285256 (kubelet)[3189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:11:39.329464 kubelet[3189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:11:39.329464 kubelet[3189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:11:39.329464 kubelet[3189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:11:39.329464 kubelet[3189]: I0117 12:11:39.329351 3189 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:11:39.538532 kubelet[3189]: I0117 12:11:39.335445 3189 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:11:39.538532 kubelet[3189]: I0117 12:11:39.335468 3189 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:11:39.538532 kubelet[3189]: I0117 12:11:39.335656 3189 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:11:39.538532 kubelet[3189]: I0117 12:11:39.538341 3189 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:11:39.543941 kubelet[3189]: I0117 12:11:39.543570 3189 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:11:39.550095 kubelet[3189]: E0117 12:11:39.550021 3189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:11:39.550393 kubelet[3189]: I0117 12:11:39.550361 3189 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:11:39.555236 kubelet[3189]: I0117 12:11:39.555177 3189 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:11:39.555356 kubelet[3189]: I0117 12:11:39.555294 3189 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:11:39.555421 kubelet[3189]: I0117 12:11:39.555388 3189 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:11:39.555567 kubelet[3189]: I0117 12:11:39.555415 3189 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-4140a712f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:11:39.555653 kubelet[3189]: I0117 12:11:39.555575 3189 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:11:39.555653 kubelet[3189]: I0117 12:11:39.555584 3189 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:11:39.555653 kubelet[3189]: I0117 12:11:39.555611 3189 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:11:39.555741 kubelet[3189]: I0117 12:11:39.555705 3189 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:11:39.555741 kubelet[3189]: I0117 12:11:39.555717 3189 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:11:39.556706 kubelet[3189]: I0117 12:11:39.555747 3189 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:11:39.556706 kubelet[3189]: I0117 12:11:39.555756 3189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:11:39.559351 kubelet[3189]: I0117 12:11:39.559328 3189 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:11:39.559810 kubelet[3189]: I0117 12:11:39.559779 3189 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:11:39.560165 kubelet[3189]: I0117 12:11:39.560130 3189 server.go:1269] "Started kubelet" Jan 17 12:11:39.562860 kubelet[3189]: I0117 12:11:39.561691 3189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jan 17 12:11:39.562860 kubelet[3189]: I0117 12:11:39.561932 3189 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:11:39.562860 kubelet[3189]: I0117 12:11:39.561978 3189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:11:39.563370 kubelet[3189]: I0117 12:11:39.563356 3189 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:11:39.563733 kubelet[3189]: I0117 12:11:39.563707 3189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:11:39.566772 kubelet[3189]: I0117 12:11:39.566749 3189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:11:39.569413 kubelet[3189]: I0117 12:11:39.569396 3189 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:11:39.570843 kubelet[3189]: I0117 12:11:39.570825 3189 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:11:39.571039 kubelet[3189]: I0117 12:11:39.571026 3189 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:11:39.574007 kubelet[3189]: E0117 12:11:39.573988 3189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4140a712f6\" not found" Jan 17 12:11:39.578872 kubelet[3189]: I0117 12:11:39.578846 3189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:11:39.579851 kubelet[3189]: I0117 12:11:39.579631 3189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:11:39.579851 kubelet[3189]: I0117 12:11:39.579649 3189 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:11:39.579851 kubelet[3189]: I0117 12:11:39.579661 3189 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:11:39.579851 kubelet[3189]: E0117 12:11:39.579693 3189 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:11:39.581505 kubelet[3189]: I0117 12:11:39.581485 3189 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:11:39.581916 kubelet[3189]: I0117 12:11:39.581640 3189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:11:39.582878 kubelet[3189]: E0117 12:11:39.582200 3189 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:11:39.603061 kubelet[3189]: I0117 12:11:39.603043 3189 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:11:39.650287 kubelet[3189]: I0117 12:11:39.650264 3189 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:11:39.650429 kubelet[3189]: I0117 12:11:39.650418 3189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:11:39.650490 kubelet[3189]: I0117 12:11:39.650483 3189 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:11:39.650659 kubelet[3189]: I0117 12:11:39.650646 3189 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:11:39.650724 kubelet[3189]: I0117 12:11:39.650703 3189 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:11:39.650774 kubelet[3189]: I0117 12:11:39.650766 3189 policy_none.go:49] "None policy: Start" Jan 17 12:11:39.651499 kubelet[3189]: I0117 12:11:39.651479 3189 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:11:39.651561 kubelet[3189]: I0117 12:11:39.651520 3189 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:11:39.651676 kubelet[3189]: I0117 12:11:39.651658 3189 state_mem.go:75] "Updated machine memory state" Jan 17 12:11:39.655727 kubelet[3189]: I0117 12:11:39.655703 3189 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:11:39.656188 kubelet[3189]: I0117 12:11:39.655867 3189 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:11:39.656188 kubelet[3189]: I0117 12:11:39.655885 3189 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:11:39.656188 kubelet[3189]: I0117 12:11:39.656063 3189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:11:39.695536 kubelet[3189]: W0117 12:11:39.695498 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:11:39.695615 kubelet[3189]: E0117 12:11:39.695554 3189 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.696362 kubelet[3189]: W0117 12:11:39.696337 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:11:39.696942 kubelet[3189]: W0117 12:11:39.696922 3189 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:11:39.759764 kubelet[3189]: I0117 12:11:39.759022 3189 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.760208 sudo[3221]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:11:39.760486 sudo[3221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:11:39.777689 kubelet[3189]: I0117 12:11:39.776730 3189 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.777689 kubelet[3189]: I0117 12:11:39.776915 3189 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-4140a712f6" 
Jan 17 12:11:39.872216 kubelet[3189]: I0117 12:11:39.872172 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872216 kubelet[3189]: I0117 12:11:39.872213 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872380 kubelet[3189]: I0117 12:11:39.872237 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872380 kubelet[3189]: I0117 12:11:39.872253 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52bbf48484c22f56ae70934d01c007a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-4140a712f6\" (UID: \"52bbf48484c22f56ae70934d01c007a4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872380 kubelet[3189]: I0117 12:11:39.872268 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872380 kubelet[3189]: I0117 12:11:39.872283 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb2b777e0c41d21f357e9b0ff3850bc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-4140a712f6\" (UID: \"fb2b777e0c41d21f357e9b0ff3850bc0\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872380 kubelet[3189]: I0117 12:11:39.872300 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872501 kubelet[3189]: I0117 12:11:39.872314 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:39.872501 kubelet[3189]: I0117 12:11:39.872331 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d365211104e947f7a615f326eac17c6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-4140a712f6\" (UID: \"d365211104e947f7a615f326eac17c6d\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" Jan 17 12:11:40.215619 sudo[3221]: pam_unix(sudo:session): session closed for user root Jan 17 12:11:40.557490 kubelet[3189]: I0117 12:11:40.557297 3189 apiserver.go:52] "Watching apiserver" Jan 17 12:11:40.571643 kubelet[3189]: I0117 12:11:40.571594 3189 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:11:40.667334 kubelet[3189]: I0117 12:11:40.667083 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4140a712f6" podStartSLOduration=2.667067793 podStartE2EDuration="2.667067793s" podCreationTimestamp="2025-01-17 12:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:40.656930211 +0000 UTC m=+1.368754253" watchObservedRunningTime="2025-01-17 12:11:40.667067793 +0000 UTC m=+1.378891835" Jan 17 12:11:40.677833 kubelet[3189]: I0117 12:11:40.677787 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-4140a712f6" podStartSLOduration=1.6777757370000002 podStartE2EDuration="1.677775737s" podCreationTimestamp="2025-01-17 12:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:40.667258874 +0000 UTC m=+1.379082916" watchObservedRunningTime="2025-01-17 12:11:40.677775737 +0000 UTC m=+1.389599779" Jan 17 12:11:40.690064 kubelet[3189]: I0117 12:11:40.689962 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4140a712f6" podStartSLOduration=1.689938484 podStartE2EDuration="1.689938484s" podCreationTimestamp="2025-01-17 12:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:40.678540899 +0000 UTC m=+1.390364941" watchObservedRunningTime="2025-01-17 12:11:40.689938484 +0000 UTC m=+1.401762526" Jan 17 12:11:42.228438 sudo[2337]: pam_unix(sudo:session): session closed for user root Jan 17 12:11:42.309407 sshd[2334]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:42.313148 systemd[1]: sshd@6-10.200.20.31:22-10.200.16.10:57662.service: Deactivated successfully. Jan 17 12:11:42.315274 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:11:42.315519 systemd[1]: session-9.scope: Consumed 7.049s CPU time, 150.1M memory peak, 0B memory swap peak. Jan 17 12:11:42.316337 systemd-logind[1641]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:11:42.317578 systemd-logind[1641]: Removed session 9. Jan 17 12:11:44.122768 kubelet[3189]: I0117 12:11:44.122737 3189 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:11:44.123262 containerd[1676]: time="2025-01-17T12:11:44.123109951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:11:44.123427 kubelet[3189]: I0117 12:11:44.123302 3189 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:11:45.020577 kubelet[3189]: W0117 12:11:45.020529 3189 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-a-4140a712f6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-a-4140a712f6' and this object Jan 17 12:11:45.020707 kubelet[3189]: E0117 12:11:45.020586 3189 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.0-a-4140a712f6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-a-4140a712f6' and this object" logger="UnhandledError" Jan 17 12:11:45.020707 kubelet[3189]: W0117 12:11:45.020642 3189 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-a-4140a712f6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-a-4140a712f6' and this object Jan 17 12:11:45.020707 kubelet[3189]: E0117 12:11:45.020654 3189 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081.3.0-a-4140a712f6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-a-4140a712f6' and this object" logger="UnhandledError" Jan 17 12:11:45.026340 systemd[1]: Created slice kubepods-besteffort-pod24e02f4d_b8cd_455b_9022_2a5f4180d9be.slice - libcontainer container kubepods-besteffort-pod24e02f4d_b8cd_455b_9022_2a5f4180d9be.slice. Jan 17 12:11:45.038702 systemd[1]: Created slice kubepods-burstable-pod826fd662_2fdf_4d37_9506_5a1edd15681a.slice - libcontainer container kubepods-burstable-pod826fd662_2fdf_4d37_9506_5a1edd15681a.slice. 
Jan 17 12:11:45.202120 kubelet[3189]: I0117 12:11:45.202072 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24e02f4d-b8cd-455b-9022-2a5f4180d9be-xtables-lock\") pod \"kube-proxy-kz2sh\" (UID: \"24e02f4d-b8cd-455b-9022-2a5f4180d9be\") " pod="kube-system/kube-proxy-kz2sh" Jan 17 12:11:45.202120 kubelet[3189]: I0117 12:11:45.202122 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-cgroup\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.202572 kubelet[3189]: I0117 12:11:45.202194 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-hubble-tls\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.202967 kubelet[3189]: I0117 12:11:45.202929 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-net\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.203034 kubelet[3189]: I0117 12:11:45.202972 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24e02f4d-b8cd-455b-9022-2a5f4180d9be-kube-proxy\") pod \"kube-proxy-kz2sh\" (UID: \"24e02f4d-b8cd-455b-9022-2a5f4180d9be\") " pod="kube-system/kube-proxy-kz2sh" Jan 17 12:11:45.203034 kubelet[3189]: I0117 12:11:45.203005 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24e02f4d-b8cd-455b-9022-2a5f4180d9be-lib-modules\") pod \"kube-proxy-kz2sh\" (UID: \"24e02f4d-b8cd-455b-9022-2a5f4180d9be\") " pod="kube-system/kube-proxy-kz2sh" Jan 17 12:11:45.203034 kubelet[3189]: I0117 12:11:45.203021 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cni-path\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.203111 kubelet[3189]: I0117 12:11:45.203035 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-bpf-maps\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.203111 kubelet[3189]: I0117 12:11:45.203050 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-hostproc\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.203241 kubelet[3189]: I0117 12:11:45.203213 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-xtables-lock\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.204905 kubelet[3189]: I0117 12:11:45.204866 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rs9\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205033 kubelet[3189]: I0117 12:11:45.205007 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-etc-cni-netd\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205078 kubelet[3189]: I0117 12:11:45.205035 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-lib-modules\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205078 kubelet[3189]: I0117 12:11:45.205051 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-config-path\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205185 kubelet[3189]: I0117 12:11:45.205162 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-kernel\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205220 kubelet[3189]: I0117 12:11:45.205190 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgv2x\" (UniqueName: \"kubernetes.io/projected/24e02f4d-b8cd-455b-9022-2a5f4180d9be-kube-api-access-rgv2x\") pod \"kube-proxy-kz2sh\" (UID: \"24e02f4d-b8cd-455b-9022-2a5f4180d9be\") " pod="kube-system/kube-proxy-kz2sh" Jan 17 12:11:45.205220 kubelet[3189]: I0117 12:11:45.205208 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-run\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.205337 kubelet[3189]: I0117 12:11:45.205316 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/826fd662-2fdf-4d37-9506-5a1edd15681a-clustermesh-secrets\") pod \"cilium-6lt68\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " pod="kube-system/cilium-6lt68" Jan 17 12:11:45.210543 systemd[1]: Created slice kubepods-besteffort-pod091f2026_32e8_4cdc_9688_ec6cc9423060.slice - libcontainer container kubepods-besteffort-pod091f2026_32e8_4cdc_9688_ec6cc9423060.slice. 
Jan 17 12:11:45.306274 kubelet[3189]: I0117 12:11:45.305888 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/091f2026-32e8-4cdc-9688-ec6cc9423060-cilium-config-path\") pod \"cilium-operator-5d85765b45-llt8v\" (UID: \"091f2026-32e8-4cdc-9688-ec6cc9423060\") " pod="kube-system/cilium-operator-5d85765b45-llt8v" Jan 17 12:11:45.306274 kubelet[3189]: I0117 12:11:45.305941 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqzz\" (UniqueName: \"kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz\") pod \"cilium-operator-5d85765b45-llt8v\" (UID: \"091f2026-32e8-4cdc-9688-ec6cc9423060\") " pod="kube-system/cilium-operator-5d85765b45-llt8v" Jan 17 12:11:46.327115 kubelet[3189]: E0117 12:11:46.326954 3189 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.327115 kubelet[3189]: E0117 12:11:46.327018 3189 projected.go:194] Error preparing data for projected volume kube-api-access-rgv2x for pod kube-system/kube-proxy-kz2sh: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.327115 kubelet[3189]: E0117 12:11:46.327099 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24e02f4d-b8cd-455b-9022-2a5f4180d9be-kube-api-access-rgv2x podName:24e02f4d-b8cd-455b-9022-2a5f4180d9be nodeName:}" failed. No retries permitted until 2025-01-17 12:11:46.827078897 +0000 UTC m=+7.538902939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rgv2x" (UniqueName: "kubernetes.io/projected/24e02f4d-b8cd-455b-9022-2a5f4180d9be-kube-api-access-rgv2x") pod "kube-proxy-kz2sh" (UID: "24e02f4d-b8cd-455b-9022-2a5f4180d9be") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.327634 kubelet[3189]: E0117 12:11:46.327230 3189 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.327634 kubelet[3189]: E0117 12:11:46.327243 3189 projected.go:194] Error preparing data for projected volume kube-api-access-s8rs9 for pod kube-system/cilium-6lt68: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.327634 kubelet[3189]: E0117 12:11:46.327266 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9 podName:826fd662-2fdf-4d37-9506-5a1edd15681a nodeName:}" failed. No retries permitted until 2025-01-17 12:11:46.827258618 +0000 UTC m=+7.539082620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s8rs9" (UniqueName: "kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9") pod "cilium-6lt68" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.415132 kubelet[3189]: E0117 12:11:46.415076 3189 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.415132 kubelet[3189]: E0117 12:11:46.415114 3189 projected.go:194] Error preparing data for projected volume kube-api-access-9gqzz for pod kube-system/cilium-operator-5d85765b45-llt8v: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:46.415277 kubelet[3189]: E0117 12:11:46.415161 3189 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz podName:091f2026-32e8-4cdc-9688-ec6cc9423060 nodeName:}" failed. No retries permitted until 2025-01-17 12:11:46.915144759 +0000 UTC m=+7.626968801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9gqzz" (UniqueName: "kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz") pod "cilium-operator-5d85765b45-llt8v" (UID: "091f2026-32e8-4cdc-9688-ec6cc9423060") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:11:47.134996 containerd[1676]: time="2025-01-17T12:11:47.134948931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kz2sh,Uid:24e02f4d-b8cd-455b-9022-2a5f4180d9be,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:47.141164 containerd[1676]: time="2025-01-17T12:11:47.140953746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lt68,Uid:826fd662-2fdf-4d37-9506-5a1edd15681a,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:47.197607 containerd[1676]: time="2025-01-17T12:11:47.193300317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:47.197607 containerd[1676]: time="2025-01-17T12:11:47.193379358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:47.197607 containerd[1676]: time="2025-01-17T12:11:47.193393838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.197607 containerd[1676]: time="2025-01-17T12:11:47.193488518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.211944 containerd[1676]: time="2025-01-17T12:11:47.211342803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:47.211944 containerd[1676]: time="2025-01-17T12:11:47.211395003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:47.211944 containerd[1676]: time="2025-01-17T12:11:47.211410323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.211944 containerd[1676]: time="2025-01-17T12:11:47.211667564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.217949 systemd[1]: Started cri-containerd-4d4a4c280830ebfb1225492bd6bfc21f6bd30296f8a6f85123885ee646c7d68e.scope - libcontainer container 4d4a4c280830ebfb1225492bd6bfc21f6bd30296f8a6f85123885ee646c7d68e. Jan 17 12:11:47.225590 systemd[1]: Started cri-containerd-e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f.scope - libcontainer container e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f. Jan 17 12:11:47.246463 containerd[1676]: time="2025-01-17T12:11:47.246382531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kz2sh,Uid:24e02f4d-b8cd-455b-9022-2a5f4180d9be,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4a4c280830ebfb1225492bd6bfc21f6bd30296f8a6f85123885ee646c7d68e\"" Jan 17 12:11:47.254074 containerd[1676]: time="2025-01-17T12:11:47.253749910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lt68,Uid:826fd662-2fdf-4d37-9506-5a1edd15681a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\"" Jan 17 12:11:47.254074 containerd[1676]: time="2025-01-17T12:11:47.253888790Z" level=info msg="CreateContainer within sandbox \"4d4a4c280830ebfb1225492bd6bfc21f6bd30296f8a6f85123885ee646c7d68e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:11:47.258051 containerd[1676]: time="2025-01-17T12:11:47.257755360Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:11:47.303337 containerd[1676]: time="2025-01-17T12:11:47.303292994Z" level=info msg="CreateContainer within sandbox \"4d4a4c280830ebfb1225492bd6bfc21f6bd30296f8a6f85123885ee646c7d68e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04b5cb30445f85fdc305f925e1b81a20dfb59e324d8292a91183d0cd5e691b97\"" Jan 17 12:11:47.304612 containerd[1676]: time="2025-01-17T12:11:47.304323237Z" level=info msg="StartContainer for \"04b5cb30445f85fdc305f925e1b81a20dfb59e324d8292a91183d0cd5e691b97\"" Jan 17 12:11:47.315134 containerd[1676]: time="2025-01-17T12:11:47.315095984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-llt8v,Uid:091f2026-32e8-4cdc-9688-ec6cc9423060,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:47.330948 systemd[1]: Started cri-containerd-04b5cb30445f85fdc305f925e1b81a20dfb59e324d8292a91183d0cd5e691b97.scope - libcontainer container 04b5cb30445f85fdc305f925e1b81a20dfb59e324d8292a91183d0cd5e691b97. Jan 17 12:11:47.363528 containerd[1676]: time="2025-01-17T12:11:47.363329905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:47.363528 containerd[1676]: time="2025-01-17T12:11:47.363388785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:47.363742 containerd[1676]: time="2025-01-17T12:11:47.363411065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.363742 containerd[1676]: time="2025-01-17T12:11:47.363483466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:47.373624 containerd[1676]: time="2025-01-17T12:11:47.373032130Z" level=info msg="StartContainer for \"04b5cb30445f85fdc305f925e1b81a20dfb59e324d8292a91183d0cd5e691b97\" returns successfully" Jan 17 12:11:47.384957 systemd[1]: Started cri-containerd-f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82.scope - libcontainer container f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82. Jan 17 12:11:47.419928 containerd[1676]: time="2025-01-17T12:11:47.419856488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-llt8v,Uid:091f2026-32e8-4cdc-9688-ec6cc9423060,Namespace:kube-system,Attempt:0,} returns sandbox id \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\"" Jan 17 12:11:47.662624 kubelet[3189]: I0117 12:11:47.662450 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kz2sh" podStartSLOduration=3.662431858 podStartE2EDuration="3.662431858s" podCreationTimestamp="2025-01-17 12:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:47.660425133 +0000 UTC m=+8.372249175" watchObservedRunningTime="2025-01-17 12:11:47.662431858 +0000 UTC m=+8.374255900" Jan 17 12:11:51.124546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808398551.mount: Deactivated successfully. Jan 17 12:11:54.402470 containerd[1676]: time="2025-01-17T12:11:54.402404582Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:54.404865 containerd[1676]: time="2025-01-17T12:11:54.404671947Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651538" Jan 17 12:11:54.407305 containerd[1676]: time="2025-01-17T12:11:54.407255152Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:11:54.409079 containerd[1676]: time="2025-01-17T12:11:54.408947116Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.151098076s" Jan 17 12:11:54.409079 containerd[1676]: time="2025-01-17T12:11:54.408987516Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 12:11:54.410968 containerd[1676]: time="2025-01-17T12:11:54.410915080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:11:54.411822 containerd[1676]: time="2025-01-17T12:11:54.411721762Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 
12:11:54.457377 containerd[1676]: time="2025-01-17T12:11:54.457313863Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\"" Jan 17 12:11:54.458059 containerd[1676]: time="2025-01-17T12:11:54.457915824Z" level=info msg="StartContainer for \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\"" Jan 17 12:11:54.484952 systemd[1]: Started cri-containerd-cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd.scope - libcontainer container cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd. Jan 17 12:11:54.508920 update_engine[1650]: I20250117 12:11:54.508872 1650 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:11:54.508920 update_engine[1650]: I20250117 12:11:54.508914 1650 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509040 1650 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509366 1650 omaha_request_params.cc:62] Current group set to lts Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509454 1650 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509464 1650 update_attempter.cc:643] Scheduling an action processor start. Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509476 1650 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509505 1650 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509549 1650 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509557 1650 omaha_request_action.cc:272] Request: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: Jan 17 12:11:54.509562 update_engine[1650]: I20250117 12:11:54.509562 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:11:54.511768 update_engine[1650]: I20250117 12:11:54.510533 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:11:54.511768 update_engine[1650]: I20250117 12:11:54.511070 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:11:54.511886 locksmithd[1734]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:11:54.512045 containerd[1676]: time="2025-01-17T12:11:54.510580461Z" level=info msg="StartContainer for \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\" returns successfully" Jan 17 12:11:54.519186 systemd[1]: cri-containerd-cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd.scope: Deactivated successfully. 
Jan 17 12:11:54.600227 update_engine[1650]: E20250117 12:11:54.600167 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:11:54.731024 update_engine[1650]: I20250117 12:11:54.600265 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:11:55.441128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd-rootfs.mount: Deactivated successfully. Jan 17 12:11:55.564262 containerd[1676]: time="2025-01-17T12:11:55.563720350Z" level=info msg="shim disconnected" id=cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd namespace=k8s.io Jan 17 12:11:55.564262 containerd[1676]: time="2025-01-17T12:11:55.564075111Z" level=warning msg="cleaning up after shim disconnected" id=cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd namespace=k8s.io Jan 17 12:11:55.564262 containerd[1676]: time="2025-01-17T12:11:55.564088631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:55.672340 containerd[1676]: time="2025-01-17T12:11:55.671890389Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:11:55.708345 containerd[1676]: time="2025-01-17T12:11:55.708301670Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\"" Jan 17 12:11:55.709640 containerd[1676]: time="2025-01-17T12:11:55.709323952Z" level=info msg="StartContainer for \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\"" Jan 17 12:11:55.734095 systemd[1]: Started cri-containerd-63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c.scope - libcontainer container 63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c. Jan 17 12:11:55.756871 containerd[1676]: time="2025-01-17T12:11:55.756748697Z" level=info msg="StartContainer for \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\" returns successfully" Jan 17 12:11:55.767069 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:11:55.767586 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:11:55.767647 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:11:55.776141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:11:55.776562 systemd[1]: cri-containerd-63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c.scope: Deactivated successfully. Jan 17 12:11:55.788374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:11:55.809276 containerd[1676]: time="2025-01-17T12:11:55.809127653Z" level=info msg="shim disconnected" id=63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c namespace=k8s.io Jan 17 12:11:55.809420 containerd[1676]: time="2025-01-17T12:11:55.809222613Z" level=warning msg="cleaning up after shim disconnected" id=63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c namespace=k8s.io Jan 17 12:11:55.809420 containerd[1676]: time="2025-01-17T12:11:55.809296613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:56.441271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c-rootfs.mount: Deactivated successfully. Jan 17 12:11:56.674436 containerd[1676]: time="2025-01-17T12:11:56.674166646Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:11:56.711847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098718423.mount: Deactivated successfully. Jan 17 12:11:56.723119 containerd[1676]: time="2025-01-17T12:11:56.723062474Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\"" Jan 17 12:11:56.725015 containerd[1676]: time="2025-01-17T12:11:56.723717876Z" level=info msg="StartContainer for \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\"" Jan 17 12:11:56.751938 systemd[1]: Started cri-containerd-44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1.scope - libcontainer container 44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1. Jan 17 12:11:56.775949 systemd[1]: cri-containerd-44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1.scope: Deactivated successfully. Jan 17 12:11:56.779304 containerd[1676]: time="2025-01-17T12:11:56.779263958Z" level=info msg="StartContainer for \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\" returns successfully" Jan 17 12:11:56.808501 containerd[1676]: time="2025-01-17T12:11:56.808393023Z" level=info msg="shim disconnected" id=44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1 namespace=k8s.io Jan 17 12:11:56.808867 containerd[1676]: time="2025-01-17T12:11:56.808624623Z" level=warning msg="cleaning up after shim disconnected" id=44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1 namespace=k8s.io Jan 17 12:11:56.808867 containerd[1676]: time="2025-01-17T12:11:56.808643383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:56.817528 containerd[1676]: time="2025-01-17T12:11:56.817427123Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:11:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:11:57.441337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1-rootfs.mount: Deactivated successfully. 
Jan 17 12:11:57.678361 containerd[1676]: time="2025-01-17T12:11:57.678237707Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:11:57.734996 containerd[1676]: time="2025-01-17T12:11:57.733669589Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\"" Jan 17 12:11:57.734996 containerd[1676]: time="2025-01-17T12:11:57.734074950Z" level=info msg="StartContainer for \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\"" Jan 17 12:11:57.756940 systemd[1]: Started cri-containerd-762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b.scope - libcontainer container 762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b. Jan 17 12:11:57.775401 systemd[1]: cri-containerd-762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b.scope: Deactivated successfully. Jan 17 12:11:57.780328 containerd[1676]: time="2025-01-17T12:11:57.780167452Z" level=info msg="StartContainer for \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\" returns successfully" Jan 17 12:11:57.795074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b-rootfs.mount: Deactivated successfully. Jan 17 12:11:57.806808 containerd[1676]: time="2025-01-17T12:11:57.806738111Z" level=info msg="shim disconnected" id=762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b namespace=k8s.io Jan 17 12:11:57.807009 containerd[1676]: time="2025-01-17T12:11:57.806787831Z" level=warning msg="cleaning up after shim disconnected" id=762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b namespace=k8s.io Jan 17 12:11:57.807009 containerd[1676]: time="2025-01-17T12:11:57.806945511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:58.682476 containerd[1676]: time="2025-01-17T12:11:58.682414328Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:11:58.716634 containerd[1676]: time="2025-01-17T12:11:58.716593523Z" level=info msg="CreateContainer within sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\"" Jan 17 12:11:58.717986 containerd[1676]: time="2025-01-17T12:11:58.717130924Z" level=info msg="StartContainer for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\"" Jan 17 12:11:58.741954 systemd[1]: Started cri-containerd-f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc.scope - libcontainer container f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc. 
Jan 17 12:11:58.767099 containerd[1676]: time="2025-01-17T12:11:58.767057555Z" level=info msg="StartContainer for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" returns successfully" Jan 17 12:11:58.852453 kubelet[3189]: I0117 12:11:58.852417 3189 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 12:11:58.890148 systemd[1]: Created slice kubepods-burstable-poda0bfa61e_ea5e_4233_b981_8ae294b39a3b.slice - libcontainer container kubepods-burstable-poda0bfa61e_ea5e_4233_b981_8ae294b39a3b.slice. Jan 17 12:11:58.899328 systemd[1]: Created slice kubepods-burstable-pod4242a826_de41_440f_9bc3_9861bd35bcfa.slice - libcontainer container kubepods-burstable-pod4242a826_de41_440f_9bc3_9861bd35bcfa.slice. Jan 17 12:11:58.984808 kubelet[3189]: I0117 12:11:58.984585 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxj94\" (UniqueName: \"kubernetes.io/projected/a0bfa61e-ea5e-4233-b981-8ae294b39a3b-kube-api-access-sxj94\") pod \"coredns-6f6b679f8f-9v8xc\" (UID: \"a0bfa61e-ea5e-4233-b981-8ae294b39a3b\") " pod="kube-system/coredns-6f6b679f8f-9v8xc" Jan 17 12:11:58.984808 kubelet[3189]: I0117 12:11:58.984632 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bfa61e-ea5e-4233-b981-8ae294b39a3b-config-volume\") pod \"coredns-6f6b679f8f-9v8xc\" (UID: \"a0bfa61e-ea5e-4233-b981-8ae294b39a3b\") " pod="kube-system/coredns-6f6b679f8f-9v8xc" Jan 17 12:11:58.984808 kubelet[3189]: I0117 12:11:58.984785 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d62p\" (UniqueName: \"kubernetes.io/projected/4242a826-de41-440f-9bc3-9861bd35bcfa-kube-api-access-7d62p\") pod \"coredns-6f6b679f8f-jnczw\" (UID: \"4242a826-de41-440f-9bc3-9861bd35bcfa\") " pod="kube-system/coredns-6f6b679f8f-jnczw" Jan 17 12:11:58.984965 kubelet[3189]: I0117 12:11:58.984822 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4242a826-de41-440f-9bc3-9861bd35bcfa-config-volume\") pod \"coredns-6f6b679f8f-jnczw\" (UID: \"4242a826-de41-440f-9bc3-9861bd35bcfa\") " pod="kube-system/coredns-6f6b679f8f-jnczw" Jan 17 12:11:59.196680 containerd[1676]: time="2025-01-17T12:11:59.196584385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9v8xc,Uid:a0bfa61e-ea5e-4233-b981-8ae294b39a3b,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:59.205911 containerd[1676]: time="2025-01-17T12:11:59.205335804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jnczw,Uid:4242a826-de41-440f-9bc3-9861bd35bcfa,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:59.701149 kubelet[3189]: I0117 12:11:59.700624 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6lt68" podStartSLOduration=7.546668619 podStartE2EDuration="14.700608862s" podCreationTimestamp="2025-01-17 12:11:45 +0000 UTC" firstStartedPulling="2025-01-17 12:11:47.256215556 +0000 UTC m=+7.968039598" lastFinishedPulling="2025-01-17 12:11:54.410155799 +0000 UTC m=+15.121979841" observedRunningTime="2025-01-17 12:11:59.69965934 +0000 UTC m=+20.411483342" watchObservedRunningTime="2025-01-17 12:11:59.700608862 +0000 UTC m=+20.412432904" Jan 17 12:12:00.473220 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3138791347.mount: Deactivated successfully. Jan 17 12:12:01.115511 containerd[1676]: time="2025-01-17T12:12:01.115451398Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:12:01.118816 containerd[1676]: time="2025-01-17T12:12:01.118775245Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Jan 17 12:12:01.121302 containerd[1676]: time="2025-01-17T12:12:01.121260090Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:12:01.122695 containerd[1676]: time="2025-01-17T12:12:01.122584532Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.711633412s" Jan 17 12:12:01.122695 containerd[1676]: time="2025-01-17T12:12:01.122621373Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 12:12:01.124903 containerd[1676]: time="2025-01-17T12:12:01.124604577Z" level=info msg="CreateContainer within sandbox \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:12:01.155951 containerd[1676]: time="2025-01-17T12:12:01.155910482Z" level=info msg="CreateContainer within sandbox \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\"" Jan 17 12:12:01.157101 containerd[1676]: time="2025-01-17T12:12:01.156321242Z" level=info msg="StartContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\"" Jan 17 12:12:01.183992 systemd[1]: Started cri-containerd-5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508.scope - libcontainer container 5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508. 
Jan 17 12:12:01.206511 containerd[1676]: time="2025-01-17T12:12:01.206462386Z" level=info msg="StartContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" returns successfully" Jan 17 12:12:01.705474 kubelet[3189]: I0117 12:12:01.705407 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-llt8v" podStartSLOduration=3.00380594 podStartE2EDuration="16.705391502s" podCreationTimestamp="2025-01-17 12:11:45 +0000 UTC" firstStartedPulling="2025-01-17 12:11:47.421718092 +0000 UTC m=+8.133542134" lastFinishedPulling="2025-01-17 12:12:01.123303654 +0000 UTC m=+21.835127696" observedRunningTime="2025-01-17 12:12:01.7045277 +0000 UTC m=+22.416351742" watchObservedRunningTime="2025-01-17 12:12:01.705391502 +0000 UTC m=+22.417215544" Jan 17 12:12:04.511480 update_engine[1650]: I20250117 12:12:04.511409 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:12:04.511839 update_engine[1650]: I20250117 12:12:04.511627 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:12:04.511869 update_engine[1650]: I20250117 12:12:04.511828 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:12:04.792408 update_engine[1650]: E20250117 12:12:04.792251 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:12:04.792408 update_engine[1650]: I20250117 12:12:04.792366 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 12:12:04.905654 systemd-networkd[1409]: cilium_host: Link UP Jan 17 12:12:04.905772 systemd-networkd[1409]: cilium_net: Link UP Jan 17 12:12:04.905915 systemd-networkd[1409]: cilium_net: Gained carrier Jan 17 12:12:04.906023 systemd-networkd[1409]: cilium_host: Gained carrier Jan 17 12:12:05.107859 systemd-networkd[1409]: cilium_vxlan: Link UP Jan 17 12:12:05.107865 systemd-networkd[1409]: cilium_vxlan: Gained carrier Jan 17 12:12:05.346935 systemd-networkd[1409]: cilium_host: Gained IPv6LL Jan 17 12:12:05.376064 kernel: NET: Registered PF_ALG protocol family Jan 17 12:12:05.554947 systemd-networkd[1409]: cilium_net: Gained IPv6LL Jan 17 12:12:06.016996 systemd-networkd[1409]: lxc_health: Link UP Jan 17 12:12:06.024276 systemd-networkd[1409]: lxc_health: Gained carrier Jan 17 12:12:06.301163 kernel: eth0: renamed from tmpa34b7 Jan 17 12:12:06.292372 systemd-networkd[1409]: lxc897742dec187: Link UP Jan 17 12:12:06.308598 systemd-networkd[1409]: lxc909e35041f20: Link UP Jan 17 12:12:06.318007 kernel: eth0: renamed from tmp55cab Jan 17 12:12:06.326743 systemd-networkd[1409]: lxc897742dec187: Gained carrier Jan 17 12:12:06.326947 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Jan 17 12:12:06.329289 systemd-networkd[1409]: lxc909e35041f20: Gained carrier Jan 17 12:12:08.051009 systemd-networkd[1409]: lxc_health: Gained IPv6LL Jan 17 12:12:08.244019 systemd-networkd[1409]: lxc897742dec187: Gained IPv6LL Jan 17 12:12:08.244686 systemd-networkd[1409]: lxc909e35041f20: Gained IPv6LL Jan 17 12:12:09.227770 kubelet[3189]: I0117 12:12:09.227706 3189 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:12:09.793677 containerd[1676]: time="2025-01-17T12:12:09.792939047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:09.793677 containerd[1676]: time="2025-01-17T12:12:09.792995448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:09.793677 containerd[1676]: time="2025-01-17T12:12:09.793010128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:09.793677 containerd[1676]: time="2025-01-17T12:12:09.793095688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:09.806194 containerd[1676]: time="2025-01-17T12:12:09.806013233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:09.806194 containerd[1676]: time="2025-01-17T12:12:09.806124514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:09.806194 containerd[1676]: time="2025-01-17T12:12:09.806150474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:09.807237 containerd[1676]: time="2025-01-17T12:12:09.806844435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:09.835978 systemd[1]: Started cri-containerd-a34b7488037debd430a78be11fa2b099836b23f1bda3d47742fe44a21922b53e.scope - libcontainer container a34b7488037debd430a78be11fa2b099836b23f1bda3d47742fe44a21922b53e. Jan 17 12:12:09.853101 systemd[1]: Started cri-containerd-55cabb91c9bea2e976ed3f4f5043523347d26aeb9886d2e6720a0bd3e6ea2ea4.scope - libcontainer container 55cabb91c9bea2e976ed3f4f5043523347d26aeb9886d2e6720a0bd3e6ea2ea4. Jan 17 12:12:09.888897 containerd[1676]: time="2025-01-17T12:12:09.888852638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9v8xc,Uid:a0bfa61e-ea5e-4233-b981-8ae294b39a3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a34b7488037debd430a78be11fa2b099836b23f1bda3d47742fe44a21922b53e\"" Jan 17 12:12:09.895137 containerd[1676]: time="2025-01-17T12:12:09.895090051Z" level=info msg="CreateContainer within sandbox \"a34b7488037debd430a78be11fa2b099836b23f1bda3d47742fe44a21922b53e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:12:09.914630 containerd[1676]: time="2025-01-17T12:12:09.914588729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jnczw,Uid:4242a826-de41-440f-9bc3-9861bd35bcfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"55cabb91c9bea2e976ed3f4f5043523347d26aeb9886d2e6720a0bd3e6ea2ea4\"" Jan 17 12:12:09.920785 containerd[1676]: time="2025-01-17T12:12:09.920745822Z" level=info msg="CreateContainer within sandbox \"55cabb91c9bea2e976ed3f4f5043523347d26aeb9886d2e6720a0bd3e6ea2ea4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:12:09.948894 containerd[1676]: time="2025-01-17T12:12:09.948856038Z" level=info msg="CreateContainer within sandbox \"a34b7488037debd430a78be11fa2b099836b23f1bda3d47742fe44a21922b53e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98ec3afc41a7feb103e28c360e2cd1303ddb57dae4eb0f7eb8246ef6a98d2d39\"" Jan 17 12:12:09.949747 containerd[1676]: time="2025-01-17T12:12:09.949648159Z" level=info msg="StartContainer for \"98ec3afc41a7feb103e28c360e2cd1303ddb57dae4eb0f7eb8246ef6a98d2d39\"" Jan 17 12:12:09.970945 containerd[1676]: time="2025-01-17T12:12:09.970915081Z" level=info msg="CreateContainer within sandbox 
\"55cabb91c9bea2e976ed3f4f5043523347d26aeb9886d2e6720a0bd3e6ea2ea4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"294c567702c0df2188b4a171ed191ec5786a6bcdc46a2e3619137fb191f2476c\"" Jan 17 12:12:09.970948 systemd[1]: Started cri-containerd-98ec3afc41a7feb103e28c360e2cd1303ddb57dae4eb0f7eb8246ef6a98d2d39.scope - libcontainer container 98ec3afc41a7feb103e28c360e2cd1303ddb57dae4eb0f7eb8246ef6a98d2d39. Jan 17 12:12:09.972270 containerd[1676]: time="2025-01-17T12:12:09.971995644Z" level=info msg="StartContainer for \"294c567702c0df2188b4a171ed191ec5786a6bcdc46a2e3619137fb191f2476c\"" Jan 17 12:12:09.996978 systemd[1]: Started cri-containerd-294c567702c0df2188b4a171ed191ec5786a6bcdc46a2e3619137fb191f2476c.scope - libcontainer container 294c567702c0df2188b4a171ed191ec5786a6bcdc46a2e3619137fb191f2476c. Jan 17 12:12:10.015485 containerd[1676]: time="2025-01-17T12:12:10.015420890Z" level=info msg="StartContainer for \"98ec3afc41a7feb103e28c360e2cd1303ddb57dae4eb0f7eb8246ef6a98d2d39\" returns successfully" Jan 17 12:12:10.037109 containerd[1676]: time="2025-01-17T12:12:10.037064493Z" level=info msg="StartContainer for \"294c567702c0df2188b4a171ed191ec5786a6bcdc46a2e3619137fb191f2476c\" returns successfully" Jan 17 12:12:10.723419 kubelet[3189]: I0117 12:12:10.722789 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jnczw" podStartSLOduration=25.722773537 podStartE2EDuration="25.722773537s" podCreationTimestamp="2025-01-17 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:10.722153576 +0000 UTC m=+31.433977578" watchObservedRunningTime="2025-01-17 12:12:10.722773537 +0000 UTC m=+31.434597579" Jan 17 12:12:10.738704 kubelet[3189]: I0117 12:12:10.738645 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9v8xc" podStartSLOduration=25.738629609 podStartE2EDuration="25.738629609s" podCreationTimestamp="2025-01-17 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:10.737728287 +0000 UTC m=+31.449552369" watchObservedRunningTime="2025-01-17 12:12:10.738629609 +0000 UTC m=+31.450453651" Jan 17 12:12:15.512551 update_engine[1650]: I20250117 12:12:15.512461 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:12:15.512917 update_engine[1650]: I20250117 12:12:15.512714 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:12:15.513002 update_engine[1650]: I20250117 12:12:15.512967 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:12:15.520231 update_engine[1650]: E20250117 12:12:15.520203 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:12:15.520276 update_engine[1650]: I20250117 12:12:15.520252 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 12:12:25.518384 update_engine[1650]: I20250117 12:12:25.518296 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:12:25.518743 update_engine[1650]: I20250117 12:12:25.518573 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:12:25.518817 update_engine[1650]: I20250117 12:12:25.518771 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:12:25.821946 update_engine[1650]: E20250117 12:12:25.821877 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:12:25.822084 update_engine[1650]: I20250117 12:12:25.821986 1650 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:12:25.822084 update_engine[1650]: I20250117 12:12:25.822002 1650 omaha_request_action.cc:617] Omaha request response: Jan 17 12:12:25.822130 update_engine[1650]: E20250117 12:12:25.822081 1650 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 12:12:25.822130 update_engine[1650]: I20250117 12:12:25.822098 1650 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 12:12:25.822130 update_engine[1650]: I20250117 12:12:25.822103 1650 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:12:25.822130 update_engine[1650]: I20250117 12:12:25.822108 1650 update_attempter.cc:306] Processing Done. Jan 17 12:12:25.822130 update_engine[1650]: E20250117 12:12:25.822125 1650 update_attempter.cc:619] Update failed. Jan 17 12:12:25.822227 update_engine[1650]: I20250117 12:12:25.822131 1650 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 12:12:25.822227 update_engine[1650]: I20250117 12:12:25.822135 1650 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 12:12:25.822227 update_engine[1650]: I20250117 12:12:25.822141 1650 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 12:12:25.822227 update_engine[1650]: I20250117 12:12:25.822206 1650 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:12:25.822301 update_engine[1650]: I20250117 12:12:25.822227 1650 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:12:25.822301 update_engine[1650]: I20250117 12:12:25.822232 1650 omaha_request_action.cc:272] Request: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: Jan 17 12:12:25.822301 update_engine[1650]: I20250117 12:12:25.822238 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:12:25.822458 update_engine[1650]: I20250117 12:12:25.822373 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:12:25.822625 update_engine[1650]: I20250117 12:12:25.822551 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:12:25.822851 locksmithd[1734]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 12:12:25.844583 update_engine[1650]: E20250117 12:12:25.844528 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844614 1650 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844631 1650 omaha_request_action.cc:617] Omaha request response: Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844643 1650 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844652 1650 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844660 1650 update_attempter.cc:306] Processing Done. Jan 17 12:12:25.844676 update_engine[1650]: I20250117 12:12:25.844669 1650 update_attempter.cc:310] Error event sent. Jan 17 12:12:25.844850 update_engine[1650]: I20250117 12:12:25.844686 1650 update_check_scheduler.cc:74] Next update check in 48m32s Jan 17 12:12:25.845022 locksmithd[1734]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 12:14:02.258932 systemd[1]: Started sshd@7-10.200.20.31:22-10.200.16.10:56084.service - OpenSSH per-connection server daemon (10.200.16.10:56084). Jan 17 12:14:02.666225 sshd[4562]: Accepted publickey for core from 10.200.16.10 port 56084 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:02.667672 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:02.671586 systemd-logind[1641]: New session 10 of user core. Jan 17 12:14:02.683112 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:14:03.046026 sshd[4562]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:03.049506 systemd[1]: sshd@7-10.200.20.31:22-10.200.16.10:56084.service: Deactivated successfully. Jan 17 12:14:03.051701 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:14:03.052907 systemd-logind[1641]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:14:03.054270 systemd-logind[1641]: Removed session 10. Jan 17 12:14:08.128026 systemd[1]: Started sshd@8-10.200.20.31:22-10.200.16.10:47010.service - OpenSSH per-connection server daemon (10.200.16.10:47010). Jan 17 12:14:08.547382 sshd[4576]: Accepted publickey for core from 10.200.16.10 port 47010 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:08.548850 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:08.553252 systemd-logind[1641]: New session 11 of user core. Jan 17 12:14:08.559939 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:14:08.927032 sshd[4576]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:08.930319 systemd[1]: sshd@8-10.200.20.31:22-10.200.16.10:47010.service: Deactivated successfully. Jan 17 12:14:08.931972 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:14:08.932982 systemd-logind[1641]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:14:08.934495 systemd-logind[1641]: Removed session 11. 
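Editor's note: the update_engine exchange that concludes above follows a fixed pattern. The Omaha request is posted to the configured server, which here is literally "disabled", so name resolution fails; libcurl retries a few times roughly ten seconds apart, the attempter then gives up, reports an error event (which also cannot be sent), and schedules the next check about 48 minutes out. This is typically the result of switching off automatic updates by pointing the update server at "disabled" (for example in /etc/flatcar/update.conf) rather than a network fault. As an illustration only, and assuming one journal entry per line plus a fixed year (the syslog-style prefix carries none), the Python sketch below extracts the transfer/retry/error lines and prints the spacing between them.

    import re
    import sys
    from datetime import datetime

    YEAR = 2025  # assumption: supply the year missing from the "Jan 17 ..." prefix
    STAMP = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) update_engine\[\d+\]: ")
    EVENTS = re.compile(r"(Starting/Resuming transfer|No HTTP response, retry \d+|"
                        r"Could not resolve host: \S+|Next update check in \S+)")

    def update_timeline(lines):
        """Yield (timestamp, event) pairs for update_engine entries like those above."""
        for line in lines:
            stamp = STAMP.match(line)
            if not stamp:
                continue
            event = EVENTS.search(line)
            if not event:
                continue
            ts = datetime.strptime(f"{YEAR} {stamp.group(1)}", "%Y %b %d %H:%M:%S.%f")
            yield ts, event.group(1)

    if __name__ == "__main__":
        prev = None
        for ts, event in update_timeline(sys.stdin):
            gap = f"+{(ts - prev).total_seconds():.1f}s" if prev else ""
            print(ts.time(), event, gap)
            prev = ts

Against the entries above this shows the three "No HTTP response" retries arriving about ten seconds apart before the final "Next update check in 48m32s" back-off.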
Jan 17 12:14:14.017293 systemd[1]: Started sshd@9-10.200.20.31:22-10.200.16.10:47016.service - OpenSSH per-connection server daemon (10.200.16.10:47016). Jan 17 12:14:14.438497 sshd[4591]: Accepted publickey for core from 10.200.16.10 port 47016 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:14.439961 sshd[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:14.444600 systemd-logind[1641]: New session 12 of user core. Jan 17 12:14:14.451987 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:14:14.815859 sshd[4591]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:14.819168 systemd-logind[1641]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:14:14.819851 systemd[1]: sshd@9-10.200.20.31:22-10.200.16.10:47016.service: Deactivated successfully. Jan 17 12:14:14.821774 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:14:14.823098 systemd-logind[1641]: Removed session 12. Jan 17 12:14:19.899102 systemd[1]: Started sshd@10-10.200.20.31:22-10.200.16.10:34410.service - OpenSSH per-connection server daemon (10.200.16.10:34410). Jan 17 12:14:20.320490 sshd[4607]: Accepted publickey for core from 10.200.16.10 port 34410 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:20.321889 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:20.326358 systemd-logind[1641]: New session 13 of user core. Jan 17 12:14:20.335993 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:14:20.700595 sshd[4607]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:20.704164 systemd[1]: sshd@10-10.200.20.31:22-10.200.16.10:34410.service: Deactivated successfully. Jan 17 12:14:20.705946 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:14:20.707265 systemd-logind[1641]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:14:20.708617 systemd-logind[1641]: Removed session 13. Jan 17 12:14:20.781079 systemd[1]: Started sshd@11-10.200.20.31:22-10.200.16.10:34416.service - OpenSSH per-connection server daemon (10.200.16.10:34416). Jan 17 12:14:21.183356 sshd[4621]: Accepted publickey for core from 10.200.16.10 port 34416 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:21.184735 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:21.189112 systemd-logind[1641]: New session 14 of user core. Jan 17 12:14:21.192966 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:14:21.582106 sshd[4621]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:21.587038 systemd[1]: sshd@11-10.200.20.31:22-10.200.16.10:34416.service: Deactivated successfully. Jan 17 12:14:21.589908 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:14:21.590835 systemd-logind[1641]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:14:21.592193 systemd-logind[1641]: Removed session 14. Jan 17 12:14:21.655865 systemd[1]: Started sshd@12-10.200.20.31:22-10.200.16.10:34430.service - OpenSSH per-connection server daemon (10.200.16.10:34430). 
Jan 17 12:14:22.063821 sshd[4632]: Accepted publickey for core from 10.200.16.10 port 34430 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:22.065354 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:22.069272 systemd-logind[1641]: New session 15 of user core. Jan 17 12:14:22.079970 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:14:22.423230 sshd[4632]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:22.426400 systemd[1]: sshd@12-10.200.20.31:22-10.200.16.10:34430.service: Deactivated successfully. Jan 17 12:14:22.428089 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:14:22.428854 systemd-logind[1641]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:14:22.430086 systemd-logind[1641]: Removed session 15. Jan 17 12:14:27.499695 systemd[1]: Started sshd@13-10.200.20.31:22-10.200.16.10:54974.service - OpenSSH per-connection server daemon (10.200.16.10:54974). Jan 17 12:14:27.924555 sshd[4645]: Accepted publickey for core from 10.200.16.10 port 54974 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:27.926000 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:27.930715 systemd-logind[1641]: New session 16 of user core. Jan 17 12:14:27.935971 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:14:28.301512 sshd[4645]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:28.305154 systemd[1]: sshd@13-10.200.20.31:22-10.200.16.10:54974.service: Deactivated successfully. Jan 17 12:14:28.307764 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:14:28.308764 systemd-logind[1641]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:14:28.309721 systemd-logind[1641]: Removed session 16. Jan 17 12:14:33.390108 systemd[1]: Started sshd@14-10.200.20.31:22-10.200.16.10:54978.service - OpenSSH per-connection server daemon (10.200.16.10:54978). Jan 17 12:14:33.811755 sshd[4659]: Accepted publickey for core from 10.200.16.10 port 54978 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:33.813596 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:33.821365 systemd-logind[1641]: New session 17 of user core. Jan 17 12:14:33.831967 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:14:34.191957 sshd[4659]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:34.195682 systemd[1]: sshd@14-10.200.20.31:22-10.200.16.10:54978.service: Deactivated successfully. Jan 17 12:14:34.197263 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:14:34.198673 systemd-logind[1641]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:14:34.200032 systemd-logind[1641]: Removed session 17. Jan 17 12:14:34.283323 systemd[1]: Started sshd@15-10.200.20.31:22-10.200.16.10:54992.service - OpenSSH per-connection server daemon (10.200.16.10:54992). Jan 17 12:14:34.686233 sshd[4675]: Accepted publickey for core from 10.200.16.10 port 54992 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:34.687595 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:34.692342 systemd-logind[1641]: New session 18 of user core. Jan 17 12:14:34.700035 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 12:14:35.085783 sshd[4675]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:35.089699 systemd-logind[1641]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:14:35.089702 systemd[1]: sshd@15-10.200.20.31:22-10.200.16.10:54992.service: Deactivated successfully. Jan 17 12:14:35.091319 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:14:35.093569 systemd-logind[1641]: Removed session 18. Jan 17 12:14:35.160751 systemd[1]: Started sshd@16-10.200.20.31:22-10.200.16.10:55002.service - OpenSSH per-connection server daemon (10.200.16.10:55002). Jan 17 12:14:35.569068 sshd[4686]: Accepted publickey for core from 10.200.16.10 port 55002 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:35.570389 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:35.574999 systemd-logind[1641]: New session 19 of user core. Jan 17 12:14:35.579976 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:14:37.366068 sshd[4686]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:37.369871 systemd-logind[1641]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:14:37.370573 systemd[1]: sshd@16-10.200.20.31:22-10.200.16.10:55002.service: Deactivated successfully. Jan 17 12:14:37.373414 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:14:37.375920 systemd-logind[1641]: Removed session 19. Jan 17 12:14:37.440647 systemd[1]: Started sshd@17-10.200.20.31:22-10.200.16.10:56880.service - OpenSSH per-connection server daemon (10.200.16.10:56880). Jan 17 12:14:37.849277 sshd[4705]: Accepted publickey for core from 10.200.16.10 port 56880 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:37.850838 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:37.855075 systemd-logind[1641]: New session 20 of user core. Jan 17 12:14:37.858976 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:14:38.346062 sshd[4705]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:38.349270 systemd-logind[1641]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:14:38.349460 systemd[1]: sshd@17-10.200.20.31:22-10.200.16.10:56880.service: Deactivated successfully. Jan 17 12:14:38.351619 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:14:38.353763 systemd-logind[1641]: Removed session 20. Jan 17 12:14:38.424176 systemd[1]: Started sshd@18-10.200.20.31:22-10.200.16.10:56888.service - OpenSSH per-connection server daemon (10.200.16.10:56888). Jan 17 12:14:38.859087 sshd[4716]: Accepted publickey for core from 10.200.16.10 port 56888 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:38.860363 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:38.864497 systemd-logind[1641]: New session 21 of user core. Jan 17 12:14:38.872004 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:14:39.236917 sshd[4716]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:39.240906 systemd[1]: sshd@18-10.200.20.31:22-10.200.16.10:56888.service: Deactivated successfully. Jan 17 12:14:39.243082 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:14:39.244119 systemd-logind[1641]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:14:39.245210 systemd-logind[1641]: Removed session 21. 
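Editor's note: from here to the end of the log the pattern is a series of short SSH sessions: systemd starts a per-connection sshd@...service unit, pam_unix opens a session for the core user, and systemd-logind removes it shortly afterwards. A throwaway sketch, again assuming one journal entry per line and a fixed year for the yearless prefix, that pairs logind's "New session" and "Removed session" lines to report how long each numbered session lasted:

    import re
    import sys
    from datetime import datetime

    YEAR = 2025  # assumption: the syslog-style prefix carries no year
    TS = r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
    NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
    GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")

    def parse(ts: str) -> datetime:
        return datetime.strptime(f"{YEAR} {ts}", "%Y %b %d %H:%M:%S.%f")

    opened = {}  # session id -> (opened-at, user)
    for line in sys.stdin:
        if (m := NEW.search(line)):
            opened[m.group(2)] = (parse(m.group(1)), m.group(3))
        elif (m := GONE.search(line)):
            start, user = opened.pop(m.group(2), (None, "?"))
            if start is not None:
                length = (parse(m.group(1)) - start).total_seconds()
                print(f"session {m.group(2)} ({user}): {length:.1f}s")

For the sessions logged above, most last well under a second of logind-visible time except session 19, which the entries show staying open for roughly two minutes before it is removed.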
Jan 17 12:14:44.319103 systemd[1]: Started sshd@19-10.200.20.31:22-10.200.16.10:56896.service - OpenSSH per-connection server daemon (10.200.16.10:56896). Jan 17 12:14:44.741024 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 56896 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:44.742378 sshd[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:44.746290 systemd-logind[1641]: New session 22 of user core. Jan 17 12:14:44.758201 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:14:45.116971 sshd[4739]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:45.120222 systemd[1]: sshd@19-10.200.20.31:22-10.200.16.10:56896.service: Deactivated successfully. Jan 17 12:14:45.121899 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:14:45.123558 systemd-logind[1641]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:14:45.124751 systemd-logind[1641]: Removed session 22. Jan 17 12:14:50.200492 systemd[1]: Started sshd@20-10.200.20.31:22-10.200.16.10:55734.service - OpenSSH per-connection server daemon (10.200.16.10:55734). Jan 17 12:14:50.605955 sshd[4753]: Accepted publickey for core from 10.200.16.10 port 55734 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:50.607289 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:50.611415 systemd-logind[1641]: New session 23 of user core. Jan 17 12:14:50.617053 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:14:50.992059 sshd[4753]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:50.996717 systemd[1]: sshd@20-10.200.20.31:22-10.200.16.10:55734.service: Deactivated successfully. Jan 17 12:14:50.999308 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:14:51.000408 systemd-logind[1641]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:14:51.001655 systemd-logind[1641]: Removed session 23. Jan 17 12:14:56.069995 systemd[1]: Started sshd@21-10.200.20.31:22-10.200.16.10:39534.service - OpenSSH per-connection server daemon (10.200.16.10:39534). Jan 17 12:14:56.497409 sshd[4765]: Accepted publickey for core from 10.200.16.10 port 39534 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:56.498741 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:56.503677 systemd-logind[1641]: New session 24 of user core. Jan 17 12:14:56.510150 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:14:56.875041 sshd[4765]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:56.877662 systemd[1]: sshd@21-10.200.20.31:22-10.200.16.10:39534.service: Deactivated successfully. Jan 17 12:14:56.879411 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:14:56.881258 systemd-logind[1641]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:14:56.882356 systemd-logind[1641]: Removed session 24. Jan 17 12:14:56.950098 systemd[1]: Started sshd@22-10.200.20.31:22-10.200.16.10:39538.service - OpenSSH per-connection server daemon (10.200.16.10:39538). 
Jan 17 12:14:57.357210 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 39538 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:14:57.358533 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:57.362154 systemd-logind[1641]: New session 25 of user core. Jan 17 12:14:57.367948 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:15:00.160514 containerd[1676]: time="2025-01-17T12:15:00.159949767Z" level=info msg="StopContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" with timeout 30 (s)" Jan 17 12:15:00.164180 containerd[1676]: time="2025-01-17T12:15:00.162027650Z" level=info msg="Stop container \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" with signal terminated" Jan 17 12:15:00.174286 containerd[1676]: time="2025-01-17T12:15:00.174239866Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:15:00.177694 systemd[1]: cri-containerd-5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508.scope: Deactivated successfully. Jan 17 12:15:00.186784 containerd[1676]: time="2025-01-17T12:15:00.186354761Z" level=info msg="StopContainer for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" with timeout 2 (s)" Jan 17 12:15:00.187131 containerd[1676]: time="2025-01-17T12:15:00.186979002Z" level=info msg="Stop container \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" with signal terminated" Jan 17 12:15:00.193347 systemd-networkd[1409]: lxc_health: Link DOWN Jan 17 12:15:00.193354 systemd-networkd[1409]: lxc_health: Lost carrier Jan 17 12:15:00.208965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508-rootfs.mount: Deactivated successfully. Jan 17 12:15:00.212436 systemd[1]: cri-containerd-f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc.scope: Deactivated successfully. Jan 17 12:15:00.213151 systemd[1]: cri-containerd-f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc.scope: Consumed 6.103s CPU time. Jan 17 12:15:00.232002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc-rootfs.mount: Deactivated successfully. 
Jan 17 12:15:00.304661 containerd[1676]: time="2025-01-17T12:15:00.304599193Z" level=info msg="shim disconnected" id=f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc namespace=k8s.io Jan 17 12:15:00.305109 containerd[1676]: time="2025-01-17T12:15:00.304883873Z" level=warning msg="cleaning up after shim disconnected" id=f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc namespace=k8s.io Jan 17 12:15:00.305109 containerd[1676]: time="2025-01-17T12:15:00.304903313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:00.305407 containerd[1676]: time="2025-01-17T12:15:00.305246194Z" level=info msg="shim disconnected" id=5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508 namespace=k8s.io Jan 17 12:15:00.305407 containerd[1676]: time="2025-01-17T12:15:00.305303754Z" level=warning msg="cleaning up after shim disconnected" id=5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508 namespace=k8s.io Jan 17 12:15:00.305407 containerd[1676]: time="2025-01-17T12:15:00.305311674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:00.325662 containerd[1676]: time="2025-01-17T12:15:00.325611900Z" level=info msg="StopContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" returns successfully" Jan 17 12:15:00.326059 containerd[1676]: time="2025-01-17T12:15:00.326029860Z" level=info msg="StopContainer for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" returns successfully" Jan 17 12:15:00.326740 containerd[1676]: time="2025-01-17T12:15:00.326712021Z" level=info msg="StopPodSandbox for \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\"" Jan 17 12:15:00.326819 containerd[1676]: time="2025-01-17T12:15:00.326750621Z" level=info msg="Container to stop \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.328583 containerd[1676]: time="2025-01-17T12:15:00.328550264Z" level=info msg="StopPodSandbox for \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\"" Jan 17 12:15:00.328680 containerd[1676]: time="2025-01-17T12:15:00.328590184Z" level=info msg="Container to stop \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.328680 containerd[1676]: time="2025-01-17T12:15:00.328602184Z" level=info msg="Container to stop \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.328680 containerd[1676]: time="2025-01-17T12:15:00.328611984Z" level=info msg="Container to stop \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.328680 containerd[1676]: time="2025-01-17T12:15:00.328621664Z" level=info msg="Container to stop \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.328680 containerd[1676]: time="2025-01-17T12:15:00.328630664Z" level=info msg="Container to stop \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:15:00.329073 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82-shm.mount: Deactivated successfully. Jan 17 12:15:00.333550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f-shm.mount: Deactivated successfully. Jan 17 12:15:00.338594 systemd[1]: cri-containerd-e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f.scope: Deactivated successfully. Jan 17 12:15:00.348814 systemd[1]: cri-containerd-f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82.scope: Deactivated successfully. Jan 17 12:15:00.384095 containerd[1676]: time="2025-01-17T12:15:00.383969975Z" level=info msg="shim disconnected" id=e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f namespace=k8s.io Jan 17 12:15:00.384095 containerd[1676]: time="2025-01-17T12:15:00.384051055Z" level=warning msg="cleaning up after shim disconnected" id=e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f namespace=k8s.io Jan 17 12:15:00.384095 containerd[1676]: time="2025-01-17T12:15:00.384059375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:00.385911 containerd[1676]: time="2025-01-17T12:15:00.385624817Z" level=info msg="shim disconnected" id=f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82 namespace=k8s.io Jan 17 12:15:00.385911 containerd[1676]: time="2025-01-17T12:15:00.385678257Z" level=warning msg="cleaning up after shim disconnected" id=f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82 namespace=k8s.io Jan 17 12:15:00.385911 containerd[1676]: time="2025-01-17T12:15:00.385701177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:00.399764 containerd[1676]: time="2025-01-17T12:15:00.399713115Z" level=info msg="TearDown network for sandbox \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\" successfully" Jan 17 12:15:00.399972 containerd[1676]: time="2025-01-17T12:15:00.399879355Z" level=info msg="StopPodSandbox for \"f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82\" returns successfully" Jan 17 12:15:00.399972 containerd[1676]: time="2025-01-17T12:15:00.399752275Z" level=info msg="TearDown network for sandbox \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" successfully" Jan 17 12:15:00.399972 containerd[1676]: time="2025-01-17T12:15:00.399947675Z" level=info msg="StopPodSandbox for \"e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f\" returns successfully" Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537384 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-hostproc\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537446 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-etc-cni-netd\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537463 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-lib-modules\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" 
(UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537477 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-bpf-maps\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537503 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8rs9\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538119 kubelet[3189]: I0117 12:15:00.537524 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/826fd662-2fdf-4d37-9506-5a1edd15681a-clustermesh-secrets\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537521 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-hostproc" (OuterVolumeSpecName: "hostproc") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537540 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-hubble-tls\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537559 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-run\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537573 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537576 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/091f2026-32e8-4cdc-9688-ec6cc9423060-cilium-config-path\") pod \"091f2026-32e8-4cdc-9688-ec6cc9423060\" (UID: \"091f2026-32e8-4cdc-9688-ec6cc9423060\") " Jan 17 12:15:00.538605 kubelet[3189]: I0117 12:15:00.537613 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-net\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537631 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-kernel\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537647 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cni-path\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537665 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-xtables-lock\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537685 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-config-path\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537700 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-cgroup\") pod \"826fd662-2fdf-4d37-9506-5a1edd15681a\" (UID: \"826fd662-2fdf-4d37-9506-5a1edd15681a\") " Jan 17 12:15:00.538737 kubelet[3189]: I0117 12:15:00.537719 3189 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gqzz\" (UniqueName: \"kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz\") pod \"091f2026-32e8-4cdc-9688-ec6cc9423060\" (UID: \"091f2026-32e8-4cdc-9688-ec6cc9423060\") " Jan 17 12:15:00.538883 kubelet[3189]: I0117 12:15:00.537757 3189 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-hostproc\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.538883 kubelet[3189]: I0117 12:15:00.537767 3189 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-bpf-maps\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.540520 kubelet[3189]: I0117 12:15:00.539377 3189 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/091f2026-32e8-4cdc-9688-ec6cc9423060-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "091f2026-32e8-4cdc-9688-ec6cc9423060" (UID: "091f2026-32e8-4cdc-9688-ec6cc9423060"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:15:00.542924 kubelet[3189]: I0117 12:15:00.542882 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.543014 kubelet[3189]: I0117 12:15:00.542936 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.543014 kubelet[3189]: I0117 12:15:00.542954 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.543014 kubelet[3189]: I0117 12:15:00.542969 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.543014 kubelet[3189]: I0117 12:15:00.542984 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cni-path" (OuterVolumeSpecName: "cni-path") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.543014 kubelet[3189]: I0117 12:15:00.543000 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.545234 kubelet[3189]: I0117 12:15:00.544908 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.545234 kubelet[3189]: I0117 12:15:00.545100 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz" (OuterVolumeSpecName: "kube-api-access-9gqzz") pod "091f2026-32e8-4cdc-9688-ec6cc9423060" (UID: "091f2026-32e8-4cdc-9688-ec6cc9423060"). InnerVolumeSpecName "kube-api-access-9gqzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:15:00.545234 kubelet[3189]: I0117 12:15:00.545191 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9" (OuterVolumeSpecName: "kube-api-access-s8rs9") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "kube-api-access-s8rs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:15:00.545234 kubelet[3189]: I0117 12:15:00.545215 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:15:00.545790 kubelet[3189]: I0117 12:15:00.545750 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:15:00.546142 kubelet[3189]: I0117 12:15:00.546052 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:15:00.546142 kubelet[3189]: I0117 12:15:00.546092 3189 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826fd662-2fdf-4d37-9506-5a1edd15681a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "826fd662-2fdf-4d37-9506-5a1edd15681a" (UID: "826fd662-2fdf-4d37-9506-5a1edd15681a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638640 3189 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s8rs9\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-kube-api-access-s8rs9\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638675 3189 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/826fd662-2fdf-4d37-9506-5a1edd15681a-clustermesh-secrets\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638686 3189 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/826fd662-2fdf-4d37-9506-5a1edd15681a-hubble-tls\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638695 3189 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-run\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638703 3189 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/091f2026-32e8-4cdc-9688-ec6cc9423060-cilium-config-path\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638712 3189 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-net\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638722 3189 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-host-proc-sys-kernel\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.638823 kubelet[3189]: I0117 12:15:00.638731 3189 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-xtables-lock\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 kubelet[3189]: I0117 12:15:00.638739 3189 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-config-path\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 kubelet[3189]: I0117 12:15:00.638746 3189 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cilium-cgroup\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 kubelet[3189]: I0117 12:15:00.638756 3189 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-cni-path\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 kubelet[3189]: I0117 12:15:00.638763 3189 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9gqzz\" (UniqueName: \"kubernetes.io/projected/091f2026-32e8-4cdc-9688-ec6cc9423060-kube-api-access-9gqzz\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 
kubelet[3189]: I0117 12:15:00.638771 3189 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-etc-cni-netd\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:00.639107 kubelet[3189]: I0117 12:15:00.638779 3189 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/826fd662-2fdf-4d37-9506-5a1edd15681a-lib-modules\") on node \"ci-4081.3.0-a-4140a712f6\" DevicePath \"\"" Jan 17 12:15:01.011839 kubelet[3189]: I0117 12:15:01.011814 3189 scope.go:117] "RemoveContainer" containerID="5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508" Jan 17 12:15:01.014596 containerd[1676]: time="2025-01-17T12:15:01.014305824Z" level=info msg="RemoveContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\"" Jan 17 12:15:01.020828 systemd[1]: Removed slice kubepods-besteffort-pod091f2026_32e8_4cdc_9688_ec6cc9423060.slice - libcontainer container kubepods-besteffort-pod091f2026_32e8_4cdc_9688_ec6cc9423060.slice. Jan 17 12:15:01.031047 containerd[1676]: time="2025-01-17T12:15:01.029354203Z" level=info msg="RemoveContainer for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" returns successfully" Jan 17 12:15:01.029517 systemd[1]: Removed slice kubepods-burstable-pod826fd662_2fdf_4d37_9506_5a1edd15681a.slice - libcontainer container kubepods-burstable-pod826fd662_2fdf_4d37_9506_5a1edd15681a.slice. Jan 17 12:15:01.029627 systemd[1]: kubepods-burstable-pod826fd662_2fdf_4d37_9506_5a1edd15681a.slice: Consumed 6.163s CPU time. Jan 17 12:15:01.032143 kubelet[3189]: I0117 12:15:01.031758 3189 scope.go:117] "RemoveContainer" containerID="5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508" Jan 17 12:15:01.032359 containerd[1676]: time="2025-01-17T12:15:01.032068207Z" level=error msg="ContainerStatus for \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\": not found" Jan 17 12:15:01.032542 kubelet[3189]: E0117 12:15:01.032505 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\": not found" containerID="5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508" Jan 17 12:15:01.032638 kubelet[3189]: I0117 12:15:01.032548 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508"} err="failed to get container status \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\": rpc error: code = NotFound desc = an error occurred when try to find container \"5482c83a837d14487607ce61acf7b9122ed912697c017569d7f2cbb6c2a5f508\": not found" Jan 17 12:15:01.032638 kubelet[3189]: I0117 12:15:01.032637 3189 scope.go:117] "RemoveContainer" containerID="f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc" Jan 17 12:15:01.034232 containerd[1676]: time="2025-01-17T12:15:01.034198210Z" level=info msg="RemoveContainer for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\"" Jan 17 12:15:01.041506 containerd[1676]: time="2025-01-17T12:15:01.041149859Z" level=info msg="RemoveContainer for 
\"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" returns successfully" Jan 17 12:15:01.042197 kubelet[3189]: I0117 12:15:01.042172 3189 scope.go:117] "RemoveContainer" containerID="762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b" Jan 17 12:15:01.044221 containerd[1676]: time="2025-01-17T12:15:01.043951182Z" level=info msg="RemoveContainer for \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\"" Jan 17 12:15:01.051184 containerd[1676]: time="2025-01-17T12:15:01.051145911Z" level=info msg="RemoveContainer for \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\" returns successfully" Jan 17 12:15:01.051606 kubelet[3189]: I0117 12:15:01.051580 3189 scope.go:117] "RemoveContainer" containerID="44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1" Jan 17 12:15:01.053829 containerd[1676]: time="2025-01-17T12:15:01.053757755Z" level=info msg="RemoveContainer for \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\"" Jan 17 12:15:01.062945 containerd[1676]: time="2025-01-17T12:15:01.062890687Z" level=info msg="RemoveContainer for \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\" returns successfully" Jan 17 12:15:01.063365 kubelet[3189]: I0117 12:15:01.063320 3189 scope.go:117] "RemoveContainer" containerID="63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c" Jan 17 12:15:01.065088 containerd[1676]: time="2025-01-17T12:15:01.064948609Z" level=info msg="RemoveContainer for \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\"" Jan 17 12:15:01.074412 containerd[1676]: time="2025-01-17T12:15:01.074251381Z" level=info msg="RemoveContainer for \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\" returns successfully" Jan 17 12:15:01.074704 kubelet[3189]: I0117 12:15:01.074534 3189 scope.go:117] "RemoveContainer" containerID="cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd" Jan 17 12:15:01.076947 containerd[1676]: time="2025-01-17T12:15:01.076879424Z" level=info msg="RemoveContainer for \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\"" Jan 17 12:15:01.083202 containerd[1676]: time="2025-01-17T12:15:01.083160673Z" level=info msg="RemoveContainer for \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\" returns successfully" Jan 17 12:15:01.083858 kubelet[3189]: I0117 12:15:01.083700 3189 scope.go:117] "RemoveContainer" containerID="f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc" Jan 17 12:15:01.084161 containerd[1676]: time="2025-01-17T12:15:01.084120114Z" level=error msg="ContainerStatus for \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\": not found" Jan 17 12:15:01.084469 kubelet[3189]: E0117 12:15:01.084351 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\": not found" containerID="f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc" Jan 17 12:15:01.084469 kubelet[3189]: I0117 12:15:01.084401 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc"} err="failed to get container status 
\"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f85204d75e7a674b406aee036d33d9a3dbdf27ea2bb53a172b873785bb6de2cc\": not found" Jan 17 12:15:01.084469 kubelet[3189]: I0117 12:15:01.084428 3189 scope.go:117] "RemoveContainer" containerID="762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b" Jan 17 12:15:01.085444 containerd[1676]: time="2025-01-17T12:15:01.085375995Z" level=error msg="ContainerStatus for \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\": not found" Jan 17 12:15:01.085915 kubelet[3189]: E0117 12:15:01.085601 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\": not found" containerID="762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b" Jan 17 12:15:01.085915 kubelet[3189]: I0117 12:15:01.085634 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b"} err="failed to get container status \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\": rpc error: code = NotFound desc = an error occurred when try to find container \"762dd1356aa76fe63d2a980aa4a788d2efb4c5e57a0767824f8536bc4ed0521b\": not found" Jan 17 12:15:01.085915 kubelet[3189]: I0117 12:15:01.085653 3189 scope.go:117] "RemoveContainer" containerID="44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1" Jan 17 12:15:01.086041 containerd[1676]: time="2025-01-17T12:15:01.085852356Z" level=error msg="ContainerStatus for \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\": not found" Jan 17 12:15:01.086125 kubelet[3189]: E0117 12:15:01.086086 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\": not found" containerID="44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1" Jan 17 12:15:01.086125 kubelet[3189]: I0117 12:15:01.086112 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1"} err="failed to get container status \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\": rpc error: code = NotFound desc = an error occurred when try to find container \"44c3cf3bc885c05713ccad2be3b8a10d87fd1cef4b5d1b8afde68903b48ffba1\": not found" Jan 17 12:15:01.086222 kubelet[3189]: I0117 12:15:01.086128 3189 scope.go:117] "RemoveContainer" containerID="63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c" Jan 17 12:15:01.086477 containerd[1676]: time="2025-01-17T12:15:01.086437237Z" level=error msg="ContainerStatus for \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\": not found" Jan 17 12:15:01.086736 kubelet[3189]: E0117 12:15:01.086620 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\": not found" containerID="63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c" Jan 17 12:15:01.086736 kubelet[3189]: I0117 12:15:01.086646 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c"} err="failed to get container status \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"63b42311870c02e6f70c5fd048050800121e2de3f3a73a0cc4cd8811433f8e8c\": not found" Jan 17 12:15:01.086736 kubelet[3189]: I0117 12:15:01.086662 3189 scope.go:117] "RemoveContainer" containerID="cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd" Jan 17 12:15:01.087274 containerd[1676]: time="2025-01-17T12:15:01.087076198Z" level=error msg="ContainerStatus for \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\": not found" Jan 17 12:15:01.087427 kubelet[3189]: E0117 12:15:01.087400 3189 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\": not found" containerID="cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd" Jan 17 12:15:01.087546 kubelet[3189]: I0117 12:15:01.087527 3189 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd"} err="failed to get container status \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd9e3896f37ff52da370772cc6eaf74df343a639eec35ccae118abc430bd26cd\": not found" Jan 17 12:15:01.154777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67d065e8f0f09f4b084ddbabd8f7821196a2e2cf2ed82a8d13efb768398fd82-rootfs.mount: Deactivated successfully. Jan 17 12:15:01.154910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c739d37cc6ff236bc9cd3a07fa48a48a3572fb26876635daf598a697f5702f-rootfs.mount: Deactivated successfully. Jan 17 12:15:01.154967 systemd[1]: var-lib-kubelet-pods-091f2026\x2d32e8\x2d4cdc\x2d9688\x2dec6cc9423060-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gqzz.mount: Deactivated successfully. Jan 17 12:15:01.155032 systemd[1]: var-lib-kubelet-pods-826fd662\x2d2fdf\x2d4d37\x2d9506\x2d5a1edd15681a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8rs9.mount: Deactivated successfully. Jan 17 12:15:01.155091 systemd[1]: var-lib-kubelet-pods-826fd662\x2d2fdf\x2d4d37\x2d9506\x2d5a1edd15681a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:15:01.155147 systemd[1]: var-lib-kubelet-pods-826fd662\x2d2fdf\x2d4d37\x2d9506\x2d5a1edd15681a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 17 12:15:01.582952 kubelet[3189]: I0117 12:15:01.582561 3189 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="091f2026-32e8-4cdc-9688-ec6cc9423060" path="/var/lib/kubelet/pods/091f2026-32e8-4cdc-9688-ec6cc9423060/volumes" Jan 17 12:15:01.583312 kubelet[3189]: I0117 12:15:01.582988 3189 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" path="/var/lib/kubelet/pods/826fd662-2fdf-4d37-9506-5a1edd15681a/volumes" Jan 17 12:15:02.165051 sshd[4778]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:02.167702 systemd-logind[1641]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:15:02.168038 systemd[1]: sshd@22-10.200.20.31:22-10.200.16.10:39538.service: Deactivated successfully. Jan 17 12:15:02.169643 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:15:02.170392 systemd[1]: session-25.scope: Consumed 1.926s CPU time. Jan 17 12:15:02.172355 systemd-logind[1641]: Removed session 25. Jan 17 12:15:02.249112 systemd[1]: Started sshd@23-10.200.20.31:22-10.200.16.10:39552.service - OpenSSH per-connection server daemon (10.200.16.10:39552). Jan 17 12:15:02.670435 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 39552 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:15:02.671892 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:02.675737 systemd-logind[1641]: New session 26 of user core. Jan 17 12:15:02.685884 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476521 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="091f2026-32e8-4cdc-9688-ec6cc9423060" containerName="cilium-operator" Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476561 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="mount-cgroup" Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476569 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="apply-sysctl-overwrites" Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476575 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="clean-cilium-state" Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476582 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="mount-bpf-fs" Jan 17 12:15:04.476573 kubelet[3189]: E0117 12:15:04.476588 3189 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="cilium-agent" Jan 17 12:15:04.477123 kubelet[3189]: I0117 12:15:04.476613 3189 memory_manager.go:354] "RemoveStaleState removing state" podUID="826fd662-2fdf-4d37-9506-5a1edd15681a" containerName="cilium-agent" Jan 17 12:15:04.477123 kubelet[3189]: I0117 12:15:04.476619 3189 memory_manager.go:354] "RemoveStaleState removing state" podUID="091f2026-32e8-4cdc-9688-ec6cc9423060" containerName="cilium-operator" Jan 17 12:15:04.489445 systemd[1]: Created slice kubepods-burstable-pod31fc0071_a1e0_4a8d_a2bf_1fe56c0bab5e.slice - libcontainer container kubepods-burstable-pod31fc0071_a1e0_4a8d_a2bf_1fe56c0bab5e.slice. 
Jan 17 12:15:04.526824 sshd[4936]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:04.530247 systemd[1]: sshd@23-10.200.20.31:22-10.200.16.10:39552.service: Deactivated successfully. Jan 17 12:15:04.530889 systemd-logind[1641]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:15:04.534326 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:15:04.534837 systemd[1]: session-26.scope: Consumed 1.453s CPU time. Jan 17 12:15:04.538692 systemd-logind[1641]: Removed session 26. Jan 17 12:15:04.603694 systemd[1]: Started sshd@24-10.200.20.31:22-10.200.16.10:39554.service - OpenSSH per-connection server daemon (10.200.16.10:39554). Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663181 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-host-proc-sys-net\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663227 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrfl\" (UniqueName: \"kubernetes.io/projected/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-kube-api-access-sdrfl\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663248 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-cilium-cgroup\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663267 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-cilium-config-path\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663283 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-cilium-run\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663287 kubelet[3189]: I0117 12:15:04.663300 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-hostproc\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663327 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-hubble-tls\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663346 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-lib-modules\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663364 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-cilium-ipsec-secrets\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663381 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-bpf-maps\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663395 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-cni-path\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663598 kubelet[3189]: I0117 12:15:04.663416 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-clustermesh-secrets\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663725 kubelet[3189]: I0117 12:15:04.663431 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-host-proc-sys-kernel\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663725 kubelet[3189]: I0117 12:15:04.663446 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-etc-cni-netd\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.663725 kubelet[3189]: I0117 12:15:04.663463 3189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e-xtables-lock\") pod \"cilium-nxsfv\" (UID: \"31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e\") " pod="kube-system/cilium-nxsfv" Jan 17 12:15:04.698187 kubelet[3189]: E0117 12:15:04.698098 3189 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:15:04.798502 containerd[1676]: time="2025-01-17T12:15:04.798448000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxsfv,Uid:31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e,Namespace:kube-system,Attempt:0,}" Jan 17 12:15:04.848232 containerd[1676]: time="2025-01-17T12:15:04.848107664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:15:04.848232 containerd[1676]: time="2025-01-17T12:15:04.848176544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:15:04.848232 containerd[1676]: time="2025-01-17T12:15:04.848201264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:04.848960 containerd[1676]: time="2025-01-17T12:15:04.848289704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:15:04.868070 systemd[1]: Started cri-containerd-2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d.scope - libcontainer container 2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d. Jan 17 12:15:04.891714 containerd[1676]: time="2025-01-17T12:15:04.891624680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxsfv,Uid:31fc0071-a1e0-4a8d-a2bf-1fe56c0bab5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\"" Jan 17 12:15:04.895000 containerd[1676]: time="2025-01-17T12:15:04.894833884Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:15:04.936417 containerd[1676]: time="2025-01-17T12:15:04.936362457Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476\"" Jan 17 12:15:04.938020 containerd[1676]: time="2025-01-17T12:15:04.937151778Z" level=info msg="StartContainer for \"73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476\"" Jan 17 12:15:04.963028 systemd[1]: Started cri-containerd-73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476.scope - libcontainer container 73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476. Jan 17 12:15:04.992361 containerd[1676]: time="2025-01-17T12:15:04.992277089Z" level=info msg="StartContainer for \"73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476\" returns successfully" Jan 17 12:15:04.997417 systemd[1]: cri-containerd-73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476.scope: Deactivated successfully. Jan 17 12:15:05.032357 sshd[4948]: Accepted publickey for core from 10.200.16.10 port 39554 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:15:05.035117 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:05.041058 systemd-logind[1641]: New session 27 of user core. Jan 17 12:15:05.049001 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 17 12:15:05.110760 containerd[1676]: time="2025-01-17T12:15:05.110668641Z" level=info msg="shim disconnected" id=73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476 namespace=k8s.io Jan 17 12:15:05.110760 containerd[1676]: time="2025-01-17T12:15:05.110754361Z" level=warning msg="cleaning up after shim disconnected" id=73d9e5dbb712139955e9e23dcc13fc2c7af90f70384fdd586b70979a6fc88476 namespace=k8s.io Jan 17 12:15:05.110760 containerd[1676]: time="2025-01-17T12:15:05.110764601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:05.350065 sshd[4948]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:05.354577 systemd[1]: sshd@24-10.200.20.31:22-10.200.16.10:39554.service: Deactivated successfully. Jan 17 12:15:05.357074 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:15:05.358176 systemd-logind[1641]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:15:05.359237 systemd-logind[1641]: Removed session 27. Jan 17 12:15:05.430916 systemd[1]: Started sshd@25-10.200.20.31:22-10.200.16.10:39566.service - OpenSSH per-connection server daemon (10.200.16.10:39566). Jan 17 12:15:05.857504 sshd[5062]: Accepted publickey for core from 10.200.16.10 port 39566 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:15:05.858935 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:05.863324 systemd-logind[1641]: New session 28 of user core. Jan 17 12:15:05.870041 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:15:06.037650 containerd[1676]: time="2025-01-17T12:15:06.037540469Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:15:06.076622 containerd[1676]: time="2025-01-17T12:15:06.076572319Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68\"" Jan 17 12:15:06.077881 containerd[1676]: time="2025-01-17T12:15:06.077835201Z" level=info msg="StartContainer for \"180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68\"" Jan 17 12:15:06.112121 systemd[1]: Started cri-containerd-180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68.scope - libcontainer container 180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68. Jan 17 12:15:06.147451 containerd[1676]: time="2025-01-17T12:15:06.147390810Z" level=info msg="StartContainer for \"180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68\" returns successfully" Jan 17 12:15:06.154377 systemd[1]: cri-containerd-180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68.scope: Deactivated successfully. 
Jan 17 12:15:06.209826 containerd[1676]: time="2025-01-17T12:15:06.209716730Z" level=info msg="shim disconnected" id=180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68 namespace=k8s.io Jan 17 12:15:06.209826 containerd[1676]: time="2025-01-17T12:15:06.209785330Z" level=warning msg="cleaning up after shim disconnected" id=180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68 namespace=k8s.io Jan 17 12:15:06.209826 containerd[1676]: time="2025-01-17T12:15:06.209816090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:06.768603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-180450550fed28419bd2aaf8d0cc55737ad0969b45d2ef39cd775c9953ccca68-rootfs.mount: Deactivated successfully. Jan 17 12:15:07.041611 containerd[1676]: time="2025-01-17T12:15:07.041453516Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:15:07.084272 containerd[1676]: time="2025-01-17T12:15:07.084208931Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1\"" Jan 17 12:15:07.085104 containerd[1676]: time="2025-01-17T12:15:07.085074692Z" level=info msg="StartContainer for \"8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1\"" Jan 17 12:15:07.117086 systemd[1]: Started cri-containerd-8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1.scope - libcontainer container 8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1. Jan 17 12:15:07.143735 systemd[1]: cri-containerd-8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1.scope: Deactivated successfully. Jan 17 12:15:07.144589 containerd[1676]: time="2025-01-17T12:15:07.144441768Z" level=info msg="StartContainer for \"8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1\" returns successfully" Jan 17 12:15:07.179304 containerd[1676]: time="2025-01-17T12:15:07.179095533Z" level=info msg="shim disconnected" id=8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1 namespace=k8s.io Jan 17 12:15:07.179304 containerd[1676]: time="2025-01-17T12:15:07.179150493Z" level=warning msg="cleaning up after shim disconnected" id=8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1 namespace=k8s.io Jan 17 12:15:07.179304 containerd[1676]: time="2025-01-17T12:15:07.179158853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:07.768711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8603ef4a38eb7e5b6ac12d9eec476ecb3442d889c854ecfc7bc3b6a4721488a1-rootfs.mount: Deactivated successfully. 
Jan 17 12:15:08.051044 containerd[1676]: time="2025-01-17T12:15:08.050426650Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:15:08.090818 containerd[1676]: time="2025-01-17T12:15:08.090719262Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d\"" Jan 17 12:15:08.091556 containerd[1676]: time="2025-01-17T12:15:08.091437183Z" level=info msg="StartContainer for \"2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d\"" Jan 17 12:15:08.121146 systemd[1]: Started cri-containerd-2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d.scope - libcontainer container 2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d. Jan 17 12:15:08.147006 systemd[1]: cri-containerd-2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d.scope: Deactivated successfully. Jan 17 12:15:08.153873 containerd[1676]: time="2025-01-17T12:15:08.153709022Z" level=info msg="StartContainer for \"2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d\" returns successfully" Jan 17 12:15:08.184912 containerd[1676]: time="2025-01-17T12:15:08.184843502Z" level=info msg="shim disconnected" id=2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d namespace=k8s.io Jan 17 12:15:08.184912 containerd[1676]: time="2025-01-17T12:15:08.184901902Z" level=warning msg="cleaning up after shim disconnected" id=2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d namespace=k8s.io Jan 17 12:15:08.184912 containerd[1676]: time="2025-01-17T12:15:08.184912022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:15:08.768819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b959e53a626844ed3dc3bdbabbbae6ac0112e20283311e09956df67241af06d-rootfs.mount: Deactivated successfully. Jan 17 12:15:09.050326 containerd[1676]: time="2025-01-17T12:15:09.049931772Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:15:09.089320 containerd[1676]: time="2025-01-17T12:15:09.089262462Z" level=info msg="CreateContainer within sandbox \"2b730aad4c47431efdafde5ddaa82fb658ab874b26efcb9efbc5217e9e9ca36d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0\"" Jan 17 12:15:09.090843 containerd[1676]: time="2025-01-17T12:15:09.089972023Z" level=info msg="StartContainer for \"58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0\"" Jan 17 12:15:09.117027 systemd[1]: Started cri-containerd-58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0.scope - libcontainer container 58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0. 
Jan 17 12:15:09.152317 containerd[1676]: time="2025-01-17T12:15:09.152165463Z" level=info msg="StartContainer for \"58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0\" returns successfully" Jan 17 12:15:09.631882 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 17 12:15:10.076489 kubelet[3189]: I0117 12:15:10.076419 3189 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nxsfv" podStartSLOduration=6.076392328 podStartE2EDuration="6.076392328s" podCreationTimestamp="2025-01-17 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:15:10.076150807 +0000 UTC m=+210.787974809" watchObservedRunningTime="2025-01-17 12:15:10.076392328 +0000 UTC m=+210.788216370" Jan 17 12:15:12.430116 systemd-networkd[1409]: lxc_health: Link UP Jan 17 12:15:12.451984 systemd-networkd[1409]: lxc_health: Gained carrier Jan 17 12:15:12.473358 systemd[1]: run-containerd-runc-k8s.io-58f174ce2dfd14b10bbb482c02de6671c73da315387b01d2a0f42fa3f3aed1b0-runc.giJN9K.mount: Deactivated successfully. Jan 17 12:15:13.650935 systemd-networkd[1409]: lxc_health: Gained IPv6LL Jan 17 12:15:19.018155 sshd[5062]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:19.021050 systemd[1]: sshd@25-10.200.20.31:22-10.200.16.10:39566.service: Deactivated successfully. Jan 17 12:15:19.023711 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:15:19.026026 systemd-logind[1641]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:15:19.027206 systemd-logind[1641]: Removed session 28.