Jan 30 13:22:30.425398 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:22:30.425425 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:22:30.425433 kernel: KASLR enabled
Jan 30 13:22:30.425439 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 13:22:30.425447 kernel: printk: bootconsole [pl11] enabled
Jan 30 13:22:30.425453 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:22:30.425460 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 30 13:22:30.425466 kernel: random: crng init done
Jan 30 13:22:30.425472 kernel: secureboot: Secure boot disabled
Jan 30 13:22:30.425478 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:22:30.425483 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 13:22:30.425489 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425495 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425503 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 13:22:30.425510 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425516 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425522 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425530 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425536 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425542 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425548 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 13:22:30.425554 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425560 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 13:22:30.425566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 13:22:30.425572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 13:22:30.425578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 13:22:30.425585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 13:22:30.425590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 13:22:30.425598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 13:22:30.425604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 13:22:30.425610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 13:22:30.425616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 13:22:30.425622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 13:22:30.425628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 13:22:30.425634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 13:22:30.425640 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 30 13:22:30.425646 kernel: Zone ranges:
Jan 30 13:22:30.425652 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 13:22:30.425658 kernel: DMA32 empty
Jan 30 13:22:30.425664 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:22:30.425674 kernel: Movable zone start for each node
Jan 30 13:22:30.425680 kernel: Early memory node ranges
Jan 30 13:22:30.425686 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 13:22:30.425693 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 30 13:22:30.425699 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 30 13:22:30.425707 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 30 13:22:30.425714 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 13:22:30.425720 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 13:22:30.425726 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 13:22:30.425733 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 13:22:30.425739 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:22:30.425746 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 13:22:30.425752 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 13:22:30.425759 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:22:30.425765 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:22:30.425772 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:22:30.425778 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 13:22:30.425786 kernel: psci: SMC Calling Convention v1.4
Jan 30 13:22:30.425792 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 13:22:30.425799 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 13:22:30.425805 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:22:30.425812 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:22:30.425818 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:22:30.425825 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:22:30.425845 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:22:30.425852 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:22:30.425858 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:22:30.425864 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:22:30.425873 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:22:30.425879 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:22:30.425886 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 13:22:30.425892 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:22:30.425898 kernel: alternatives: applying boot alternatives
Jan 30 13:22:30.425906 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:30.425913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:22:30.425920 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:22:30.425926 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:22:30.425933 kernel: Fallback order for Node 0: 0
Jan 30 13:22:30.425939 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 13:22:30.425948 kernel: Policy zone: Normal
Jan 30 13:22:30.425954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:22:30.425961 kernel: software IO TLB: area num 2.
Jan 30 13:22:30.425967 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 30 13:22:30.425974 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved)
Jan 30 13:22:30.425981 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:22:30.425987 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:22:30.425994 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:22:30.426001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:22:30.426007 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:22:30.426014 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:22:30.426022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:22:30.426029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:22:30.426035 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:22:30.426042 kernel: GICv3: 960 SPIs implemented
Jan 30 13:22:30.426048 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:22:30.426054 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:22:30.426061 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:22:30.426068 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 13:22:30.426074 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 13:22:30.426081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:22:30.426088 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:30.426094 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:22:30.426102 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:22:30.426109 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:22:30.426116 kernel: Console: colour dummy device 80x25
Jan 30 13:22:30.426124 kernel: printk: console [tty1] enabled
Jan 30 13:22:30.426131 kernel: ACPI: Core revision 20230628
Jan 30 13:22:30.426137 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:22:30.426144 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:22:30.426151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:22:30.426157 kernel: landlock: Up and running.
Jan 30 13:22:30.426166 kernel: SELinux: Initializing.
Jan 30 13:22:30.426172 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.426179 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.426186 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:30.426192 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:30.426199 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 13:22:30.426206 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 13:22:30.426220 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:22:30.426227 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:22:30.426234 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:22:30.426244 kernel: Remapping and enabling EFI services.
Jan 30 13:22:30.426251 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:22:30.426260 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:22:30.426267 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 13:22:30.426274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:30.426281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:22:30.426288 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:22:30.426297 kernel: SMP: Total of 2 processors activated.
Jan 30 13:22:30.426304 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:22:30.426311 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 13:22:30.426318 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:22:30.426325 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:22:30.426332 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:22:30.432886 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:22:30.432905 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:22:30.432912 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:22:30.432927 kernel: alternatives: applying system-wide alternatives
Jan 30 13:22:30.432934 kernel: devtmpfs: initialized
Jan 30 13:22:30.432942 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:22:30.432949 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:22:30.432956 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:22:30.432963 kernel: SMBIOS 3.1.0 present.
Jan 30 13:22:30.432970 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 13:22:30.432978 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:22:30.432985 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:22:30.432994 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:22:30.433001 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:22:30.433009 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:22:30.433016 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 13:22:30.433023 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:22:30.433030 kernel: cpuidle: using governor menu
Jan 30 13:22:30.433038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:22:30.433045 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:22:30.433052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:22:30.433061 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:22:30.433068 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:22:30.433075 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:22:30.433082 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:22:30.433089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:22:30.433103 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:22:30.433118 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433127 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:22:30.433134 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:22:30.433148 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:22:30.433155 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:22:30.433162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:22:30.433169 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:22:30.433176 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:22:30.433183 kernel: ACPI: Interpreter enabled
Jan 30 13:22:30.433192 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:22:30.433199 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:22:30.433206 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:22:30.433214 kernel: printk: bootconsole [pl11] disabled
Jan 30 13:22:30.433221 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 13:22:30.433228 kernel: iommu: Default domain type: Translated
Jan 30 13:22:30.433235 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:22:30.433242 kernel: efivars: Registered efivars operations
Jan 30 13:22:30.433249 kernel: vgaarb: loaded
Jan 30 13:22:30.433258 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:22:30.433265 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:22:30.433272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:22:30.433279 kernel: pnp: PnP ACPI init
Jan 30 13:22:30.433286 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 13:22:30.433293 kernel: NET: Registered PF_INET protocol family
Jan 30 13:22:30.433301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:30.433308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:22:30.433315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:22:30.433325 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:22:30.433332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:22:30.433350 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:22:30.433358 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.433365 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.433372 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:22:30.433379 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:22:30.433386 kernel: kvm [1]: HYP mode not available
Jan 30 13:22:30.433393 kernel: Initialise system trusted keyrings
Jan 30 13:22:30.433402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:22:30.433409 kernel: Key type asymmetric registered
Jan 30 13:22:30.433416 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:22:30.433423 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:22:30.433430 kernel: io scheduler mq-deadline registered
Jan 30 13:22:30.433437 kernel: io scheduler kyber registered
Jan 30 13:22:30.433444 kernel: io scheduler bfq registered
Jan 30 13:22:30.433451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:22:30.433458 kernel: thunder_xcv, ver 1.0
Jan 30 13:22:30.433467 kernel: thunder_bgx, ver 1.0
Jan 30 13:22:30.433474 kernel: nicpf, ver 1.0
Jan 30 13:22:30.433482 kernel: nicvf, ver 1.0
Jan 30 13:22:30.433651 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:22:30.433723 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:22:29 UTC (1738243349)
Jan 30 13:22:30.433733 kernel: efifb: probing for efifb
Jan 30 13:22:30.433741 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:22:30.433748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:22:30.433758 kernel: efifb: scrolling: redraw
Jan 30 13:22:30.433765 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:22:30.433772 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:22:30.433779 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:22:30.433786 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 13:22:30.433793 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:22:30.433800 kernel: No ACPI PMU IRQ for CPU0
Jan 30 13:22:30.433807 kernel: No ACPI PMU IRQ for CPU1
Jan 30 13:22:30.433814 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 13:22:30.433823 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:22:30.433830 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:22:30.433837 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:22:30.433844 kernel: Segment Routing with IPv6
Jan 30 13:22:30.433851 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:22:30.433858 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:22:30.433865 kernel: Key type dns_resolver registered
Jan 30 13:22:30.433872 kernel: registered taskstats version 1
Jan 30 13:22:30.433879 kernel: Loading compiled-in X.509 certificates
Jan 30 13:22:30.433888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:22:30.433896 kernel: Key type .fscrypt registered
Jan 30 13:22:30.433903 kernel: Key type fscrypt-provisioning registered
Jan 30 13:22:30.433910 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:22:30.433917 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:22:30.433924 kernel: ima: No architecture policies found
Jan 30 13:22:30.433931 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:22:30.433938 kernel: clk: Disabling unused clocks
Jan 30 13:22:30.433945 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:22:30.433954 kernel: Run /init as init process
Jan 30 13:22:30.433961 kernel: with arguments:
Jan 30 13:22:30.433968 kernel: /init
Jan 30 13:22:30.433975 kernel: with environment:
Jan 30 13:22:30.433982 kernel: HOME=/
Jan 30 13:22:30.433989 kernel: TERM=linux
Jan 30 13:22:30.433996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:22:30.434005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:22:30.434016 systemd[1]: Detected virtualization microsoft.
Jan 30 13:22:30.434024 systemd[1]: Detected architecture arm64.
Jan 30 13:22:30.434031 systemd[1]: Running in initrd.
Jan 30 13:22:30.434039 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:22:30.434047 systemd[1]: Hostname set to .
Jan 30 13:22:30.434055 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:22:30.434062 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:22:30.434070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:30.434080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:30.434088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:22:30.434096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:22:30.434104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:22:30.434112 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:22:30.434121 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:22:30.434131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:22:30.434139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:30.434146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:30.434154 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:30.434161 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:22:30.434169 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:22:30.434177 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:30.434184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:22:30.434192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:22:30.434202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:22:30.434210 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:22:30.434218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:30.434225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:30.434233 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:30.434241 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:30.434249 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:22:30.434257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:22:30.434266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:22:30.434274 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:22:30.434281 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:22:30.434289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:22:30.434317 systemd-journald[218]: Collecting audit messages is disabled.
Jan 30 13:22:30.434366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:30.434376 systemd-journald[218]: Journal started
Jan 30 13:22:30.434400 systemd-journald[218]: Runtime Journal (/run/log/journal/3afada14e3f14c159fbc868f4d170f63) is 8.0M, max 78.5M, 70.5M free.
Jan 30 13:22:30.434799 systemd-modules-load[219]: Inserted module 'overlay'
Jan 30 13:22:30.452317 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:22:30.453045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:22:30.474053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:30.510694 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:22:30.510723 kernel: Bridge firewalling registered
Jan 30 13:22:30.502464 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:22:30.509739 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 30 13:22:30.515851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:30.527604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:30.558745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:30.567807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:22:30.602506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:22:30.611595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:22:30.635605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:30.644120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:30.658204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:22:30.671859 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:30.701656 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:22:30.717598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:30.739023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:22:30.758717 dracut-cmdline[251]: dracut-dracut-053
Jan 30 13:22:30.758717 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:30.810592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:30.815054 systemd-resolved[257]: Positive Trust Anchors:
Jan 30 13:22:30.815065 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:30.815100 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:30.817473 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 30 13:22:30.821903 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:30.837228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:30.969385 kernel: SCSI subsystem initialized
Jan 30 13:22:30.978367 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:22:30.991364 kernel: iscsi: registered transport (tcp)
Jan 30 13:22:31.015565 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:22:31.015639 kernel: QLogic iSCSI HBA Driver
Jan 30 13:22:31.066660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:22:31.085000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:22:31.119509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:22:31.119547 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:22:31.126368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:22:31.175362 kernel: raid6: neonx8 gen() 15789 MB/s
Jan 30 13:22:31.195350 kernel: raid6: neonx4 gen() 15808 MB/s
Jan 30 13:22:31.215348 kernel: raid6: neonx2 gen() 13198 MB/s
Jan 30 13:22:31.236353 kernel: raid6: neonx1 gen() 10502 MB/s
Jan 30 13:22:31.256348 kernel: raid6: int64x8 gen() 6798 MB/s
Jan 30 13:22:31.276349 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 30 13:22:31.297349 kernel: raid6: int64x2 gen() 6111 MB/s
Jan 30 13:22:31.320589 kernel: raid6: int64x1 gen() 5061 MB/s
Jan 30 13:22:31.320610 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Jan 30 13:22:31.346242 kernel: raid6: .... xor() 12123 MB/s, rmw enabled
Jan 30 13:22:31.346255 kernel: raid6: using neon recovery algorithm
Jan 30 13:22:31.358600 kernel: xor: measuring software checksum speed
Jan 30 13:22:31.358619 kernel: 8regs : 21426 MB/sec
Jan 30 13:22:31.362382 kernel: 32regs : 21647 MB/sec
Jan 30 13:22:31.365808 kernel: arm64_neon : 27729 MB/sec
Jan 30 13:22:31.370096 kernel: xor: using function: arm64_neon (27729 MB/sec)
Jan 30 13:22:31.423360 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:22:31.434496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:22:31.452557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:31.487285 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 30 13:22:31.492919 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:31.523734 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:22:31.541794 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 30 13:22:31.571799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:31.591622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:22:31.634690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:31.656566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:22:31.694250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:31.710056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:31.726628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:31.743875 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:22:31.763840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:22:31.776006 kernel: hv_vmbus: Vmbus version:5.3
Jan 30 13:22:31.786379 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:22:31.805397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:22:31.852955 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 30 13:22:31.852986 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:22:31.852996 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:22:31.853165 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:22:31.853176 kernel: PTP clock support registered
Jan 30 13:22:31.853185 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:22:31.805577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:31.897136 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 13:22:31.897162 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:22:31.897181 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:22:31.897190 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:22:31.840147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:31.926551 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:22:31.926580 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:22:31.926589 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:22:31.926598 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:22:31.867415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:31.953667 kernel: scsi host0: storvsc_host_t
Jan 30 13:22:31.867653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:31.991287 kernel: scsi host1: storvsc_host_t
Jan 30 13:22:31.991527 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:22:31.991548 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:22:31.933491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:31.953082 systemd-resolved[257]: Clock change detected. Flushing caches.
Jan 30 13:22:32.019704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:32.031865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:32.062536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:32.130975 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:22:32.142886 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: VF slot 1 added
Jan 30 13:22:32.143062 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:22:32.143098 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:22:32.079649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:32.079977 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:32.176977 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:22:32.177001 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:22:32.272038 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:22:32.272414 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:22:32.276381 kernel: hv_pci 0ec50c40-8cc6-489a-a6bb-a95e3d53d8b3: PCI VMBus probing: Using version 0x10004
Jan 30 13:22:32.361953 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:22:32.362122 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:22:32.362221 kernel: hv_pci 0ec50c40-8cc6-489a-a6bb-a95e3d53d8b3: PCI host bridge to bus 8cc6:00
Jan 30 13:22:32.362306 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:32.362317 kernel: pci_bus 8cc6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 30 13:22:32.362413 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:22:32.362498 kernel: pci_bus 8cc6:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:22:32.362572 kernel: pci 8cc6:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 30 13:22:32.362667 kernel: pci 8cc6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:22:32.362749 kernel: pci 8cc6:00:02.0: enabling Extended Tags
Jan 30 13:22:32.362827 kernel: pci 8cc6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8cc6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 30 13:22:32.362907 kernel: pci_bus 8cc6:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:22:32.362982 kernel: pci 8cc6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:22:32.099442 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:32.183826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:32.251864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:32.286404 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:32.360151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:32.422493 kernel: mlx5_core 8cc6:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:22:32.750824 kernel: mlx5_core 8cc6:00:02.0: firmware version: 16.30.1284
Jan 30 13:22:32.750998 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (501)
Jan 30 13:22:32.751009 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (485)
Jan 30 13:22:32.751019 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:32.751037 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:32.751046 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: VF registering: eth1
Jan 30 13:22:32.751164 kernel: mlx5_core 8cc6:00:02.0 eth1: joined to eth0
Jan 30 13:22:32.751263 kernel: mlx5_core 8cc6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 30 13:22:32.485229 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:22:32.767589 kernel: mlx5_core 8cc6:00:02.0 enP36038s1: renamed from eth1
Jan 30 13:22:32.519828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:22:32.556943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:22:32.607945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:22:32.615197 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:22:32.629302 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:22:33.667743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:33.667805 disk-uuid[602]: The operation has completed successfully.
Jan 30 13:22:33.730443 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:22:33.732138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:22:33.765263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:22:33.779387 sh[692]: Success
Jan 30 13:22:33.815160 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:22:34.153958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:22:34.174573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:22:34.187511 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:22:34.228429 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:22:34.228496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:34.243131 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:22:34.243179 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:22:34.250518 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:22:34.754236 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:22:34.761624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:22:34.791510 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:22:34.802534 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:22:34.863738 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:34.863761 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:34.863770 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:34.916190 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:34.939071 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:22:34.945066 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:34.952927 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:22:34.970430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:22:35.003083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:22:35.026309 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:22:35.051097 systemd-networkd[876]: lo: Link UP
Jan 30 13:22:35.051239 systemd-networkd[876]: lo: Gained carrier
Jan 30 13:22:35.052887 systemd-networkd[876]: Enumeration completed
Jan 30 13:22:35.055798 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:35.056417 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:35.056421 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:35.069187 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:35.165133 kernel: mlx5_core 8cc6:00:02.0 enP36038s1: Link up
Jan 30 13:22:35.213136 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: Data path switched to VF: enP36038s1
Jan 30 13:22:35.213849 systemd-networkd[876]: enP36038s1: Link UP
Jan 30 13:22:35.214103 systemd-networkd[876]: eth0: Link UP
Jan 30 13:22:35.214520 systemd-networkd[876]: eth0: Gained carrier
Jan 30 13:22:35.214532 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:35.244819 systemd-networkd[876]: enP36038s1: Gained carrier
Jan 30 13:22:35.263185 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 13:22:36.325438 ignition[844]: Ignition 2.20.0
Jan 30 13:22:36.325449 ignition[844]: Stage: fetch-offline
Jan 30 13:22:36.330965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:36.325489 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:36.325498 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:36.325597 ignition[844]: parsed url from cmdline: ""
Jan 30 13:22:36.361411 systemd-networkd[876]: eth0: Gained IPv6LL
Jan 30 13:22:36.325600 ignition[844]: no config URL provided
Jan 30 13:22:36.365421 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:22:36.325604 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:36.325611 ignition[844]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:22:36.325617 ignition[844]: failed to fetch config: resource requires networking
Jan 30 13:22:36.325810 ignition[844]: Ignition finished successfully
Jan 30 13:22:36.390269 ignition[887]: Ignition 2.20.0
Jan 30 13:22:36.390278 ignition[887]: Stage: fetch
Jan 30 13:22:36.390505 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:36.390516 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:36.390623 ignition[887]: parsed url from cmdline: ""
Jan 30 13:22:36.390627 ignition[887]: no config URL provided
Jan 30 13:22:36.390631 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:36.390639 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:22:36.390668 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:22:36.556247 ignition[887]: GET result: OK
Jan 30 13:22:36.556340 ignition[887]: config has been read from IMDS userdata
Jan 30 13:22:36.556395 ignition[887]: parsing config with SHA512: e5a6dc2e3dca1d58db79274419554ff9247d6ff28e868cfea2457fdd295d10272f7be5d157e65969ff9b742f58362e09e8d090027318ae6ecdf64d1ef20447a0
Jan 30 13:22:36.561221 unknown[887]: fetched base config from "system"
Jan 30 13:22:36.561229 unknown[887]: fetched base config from "system"
Jan 30 13:22:36.566407 ignition[887]: fetch: fetch complete
Jan 30 13:22:36.561234 unknown[887]: fetched user config from "azure"
Jan 30 13:22:36.566414 ignition[887]: fetch: fetch passed
Jan 30 13:22:36.573023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:22:36.566481 ignition[887]: Ignition finished successfully
Jan 30 13:22:36.599356 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:22:36.620098 ignition[893]: Ignition 2.20.0
Jan 30 13:22:36.623507 ignition[893]: Stage: kargs
Jan 30 13:22:36.629536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:36.623764 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:36.623776 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:36.624782 ignition[893]: kargs: kargs passed
Jan 30 13:22:36.624833 ignition[893]: Ignition finished successfully
Jan 30 13:22:36.663632 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:22:36.685881 ignition[899]: Ignition 2.20.0
Jan 30 13:22:36.685893 ignition[899]: Stage: disks
Jan 30 13:22:36.691029 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:22:36.686065 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:36.698349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:36.686075 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:36.709125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:22:36.687046 ignition[899]: disks: disks passed
Jan 30 13:22:36.723437 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:36.687095 ignition[899]: Ignition finished successfully
Jan 30 13:22:36.736349 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:36.749815 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:36.782392 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:22:36.818496 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:22:36.827273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:22:36.846445 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:22:36.906134 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:22:36.906596 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:22:36.912320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:36.935390 systemd-networkd[876]: enP36038s1: Gained IPv6LL
Jan 30 13:22:36.941359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:36.954857 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:22:36.974634 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919)
Jan 30 13:22:36.984006 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:22:37.012625 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:37.012653 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:37.012663 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:36.999331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:22:36.999371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:37.021588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:22:37.060795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:22:37.075440 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:37.070178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:37.736609 coreos-metadata[921]: Jan 30 13:22:37.736 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:22:37.757527 coreos-metadata[921]: Jan 30 13:22:37.757 INFO Fetch successful
Jan 30 13:22:37.765715 coreos-metadata[921]: Jan 30 13:22:37.765 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:22:37.793956 coreos-metadata[921]: Jan 30 13:22:37.793 INFO Fetch successful
Jan 30 13:22:37.800575 coreos-metadata[921]: Jan 30 13:22:37.800 INFO wrote hostname ci-4186.1.0-a-4db8cd7df2 to /sysroot/etc/hostname
Jan 30 13:22:37.810980 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:38.126144 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:22:38.231343 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:22:38.243183 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:22:38.255644 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:22:39.570926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:39.587623 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:22:39.600380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:39.622499 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:39.617288 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:22:39.652145 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:39.671149 ignition[1039]: INFO : Ignition 2.20.0
Jan 30 13:22:39.671149 ignition[1039]: INFO : Stage: mount
Jan 30 13:22:39.671149 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:39.671149 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:39.702770 ignition[1039]: INFO : mount: mount passed
Jan 30 13:22:39.702770 ignition[1039]: INFO : Ignition finished successfully
Jan 30 13:22:39.681173 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:22:39.707343 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:22:39.729443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:39.756135 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050)
Jan 30 13:22:39.770812 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:39.770868 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:39.775328 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:39.783128 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:39.784730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:39.814756 ignition[1067]: INFO : Ignition 2.20.0
Jan 30 13:22:39.814756 ignition[1067]: INFO : Stage: files
Jan 30 13:22:39.824072 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:39.824072 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:39.824072 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:22:39.824072 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:22:39.824072 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:22:39.924100 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:22:39.932267 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:22:39.932267 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:22:39.929584 unknown[1067]: wrote ssh authorized keys file for user: core
Jan 30 13:22:39.976178 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:22:39.989032 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:22:40.010126 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:22:40.100275 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:22:40.100275 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:22:40.123237 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:22:40.567867 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:22:40.634484 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 13:22:41.064035 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:22:41.246218 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:22:41.246218 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:22:41.271043 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: files passed
Jan 30 13:22:41.285442 ignition[1067]: INFO : Ignition finished successfully
Jan 30 13:22:41.284255 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:22:41.331392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:22:41.351330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:22:41.379731 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:22:41.432291 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:41.432291 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:41.379830 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:22:41.461726 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:41.398134 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:41.411411 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:22:41.454317 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:22:41.511234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:22:41.511372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:22:41.526305 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:22:41.543821 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:22:41.559148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:22:41.581432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:22:41.609756 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:41.632436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:22:41.658068 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:41.676721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:41.698685 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:22:41.707082 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:22:41.707311 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:41.734195 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:22:41.742985 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:22:41.758696 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:22:41.774298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:41.787987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:41.802165 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:22:41.814353 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:41.827551 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:22:41.839146 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:22:41.851189 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:22:41.860904 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:22:41.861080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:41.877390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:41.888407 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:41.900641 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:22:41.900751 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:41.914133 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:22:41.914304 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:41.931280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:22:41.931468 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:41.947320 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:22:41.947471 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:22:41.958080 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:22:41.958253 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:42.021166 ignition[1120]: INFO : Ignition 2.20.0
Jan 30 13:22:42.021166 ignition[1120]: INFO : Stage: umount
Jan 30 13:22:42.021166 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:42.021166 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:22:42.021166 ignition[1120]: INFO : umount: umount passed
Jan 30 13:22:42.021166 ignition[1120]: INFO : Ignition finished successfully
Jan 30 13:22:41.991271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:22:42.006200 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:22:42.006430 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:42.023509 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:42.029753 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:22:42.029938 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:42.039401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:22:42.039519 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:42.053500 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:22:42.053606 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:22:42.061931 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:22:42.062045 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:22:42.082791 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:22:42.082857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:42.097490 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:22:42.097542 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:22:42.108073 systemd[1]: Stopped target network.target - Network.
Jan 30 13:22:42.124447 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:22:42.124523 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:42.139117 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:22:42.144241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:22:42.151134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:42.164743 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:22:42.175879 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:22:42.186314 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:22:42.186374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:42.200386 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:22:42.200442 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:42.212526 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:22:42.212591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:22:42.223356 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:22:42.223401 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:42.234768 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:22:42.246081 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:22:42.251944 systemd-networkd[876]: eth0: DHCPv6 lease lost Jan 30 13:22:42.266860 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:22:42.267524 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:22:42.267651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:22:42.277720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:22:42.277816 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:22:42.290667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:22:42.532360 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: Data path switched from VF: enP36038s1 Jan 30 13:22:42.292876 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:22:42.305734 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:22:42.305811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:42.326353 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 30 13:22:42.337812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:22:42.337901 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:42.352661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:22:42.352724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:42.363005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:22:42.363058 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:42.374188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:22:42.374241 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:42.385402 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:42.428216 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:22:42.428415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:42.441718 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:22:42.441766 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:42.455951 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:22:42.455990 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:42.467961 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:22:42.468018 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:42.483677 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:22:42.483742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:22:42.501212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:22:42.501279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:42.551398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:22:42.566196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:22:42.566303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:42.579168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:42.579231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:42.593165 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:22:42.593286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:22:42.603591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:22:42.603688 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:22:43.845206 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:22:43.845310 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:22:43.851000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:22:43.862184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:22:43.862254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:43.890382 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:22:43.901454 systemd[1]: Switching root. 
Jan 30 13:22:44.220689 systemd-journald[218]: Journal stopped
Jan 30 13:22:30.425398 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:22:30.425425 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:22:30.425433 kernel: KASLR enabled
Jan 30 13:22:30.425439 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 13:22:30.425447 kernel: printk: bootconsole [pl11] enabled
Jan 30 13:22:30.425453 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:22:30.425460 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 30 13:22:30.425466 kernel: random: crng init done
Jan 30 13:22:30.425472 kernel: secureboot: Secure boot disabled
Jan 30 13:22:30.425478 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:22:30.425483 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 13:22:30.425489 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425495 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425503 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 13:22:30.425510 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425516 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425522 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425530 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425536 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425542 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425548 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 13:22:30.425554 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:22:30.425560 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 13:22:30.425566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 13:22:30.425572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 13:22:30.425578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 13:22:30.425585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 13:22:30.425590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 13:22:30.425598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 13:22:30.425604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 13:22:30.425610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 13:22:30.425616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 13:22:30.425622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 13:22:30.425628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 13:22:30.425634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 13:22:30.425640 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 30 13:22:30.425646 kernel: Zone ranges:
Jan 30 13:22:30.425652 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 13:22:30.425658 kernel: DMA32 empty
Jan 30 13:22:30.425664 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:22:30.425674 kernel: Movable zone start for each node
Jan 30 13:22:30.425680 kernel: Early memory node ranges
Jan 30 13:22:30.425686 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 13:22:30.425693 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 30 13:22:30.425699 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 30 13:22:30.425707 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 30 13:22:30.425714 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 13:22:30.425720 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 13:22:30.425726 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 13:22:30.425733 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 13:22:30.425739 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:22:30.425746 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 13:22:30.425752 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 13:22:30.425759 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:22:30.425765 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:22:30.425772 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:22:30.425778 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 13:22:30.425786 kernel: psci: SMC Calling Convention v1.4
Jan 30 13:22:30.425792 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 13:22:30.425799 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 13:22:30.425805 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:22:30.425812 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:22:30.425818 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:22:30.425825 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:22:30.425845 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:22:30.425852 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:22:30.425858 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:22:30.425864 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:22:30.425873 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:22:30.425879 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:22:30.425886 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 13:22:30.425892 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:22:30.425898 kernel: alternatives: applying boot alternatives
Jan 30 13:22:30.425906 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:30.425913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:22:30.425920 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:22:30.425926 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:22:30.425933 kernel: Fallback order for Node 0: 0
Jan 30 13:22:30.425939 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 13:22:30.425948 kernel: Policy zone: Normal
Jan 30 13:22:30.425954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:22:30.425961 kernel: software IO TLB: area num 2.
Jan 30 13:22:30.425967 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 30 13:22:30.425974 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved)
Jan 30 13:22:30.425981 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:22:30.425987 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:22:30.425994 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:22:30.426001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:22:30.426007 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:22:30.426014 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:22:30.426022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:22:30.426029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:22:30.426035 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:22:30.426042 kernel: GICv3: 960 SPIs implemented
Jan 30 13:22:30.426048 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:22:30.426054 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:22:30.426061 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:22:30.426068 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 13:22:30.426074 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 13:22:30.426081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:22:30.426088 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:30.426094 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:22:30.426102 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:22:30.426109 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:22:30.426116 kernel: Console: colour dummy device 80x25
Jan 30 13:22:30.426124 kernel: printk: console [tty1] enabled
Jan 30 13:22:30.426131 kernel: ACPI: Core revision 20230628
Jan 30 13:22:30.426137 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:22:30.426144 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:22:30.426151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:22:30.426157 kernel: landlock: Up and running.
Jan 30 13:22:30.426166 kernel: SELinux: Initializing.
Jan 30 13:22:30.426172 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.426179 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.426186 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:30.426192 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:30.426199 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 13:22:30.426206 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 13:22:30.426220 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:22:30.426227 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:22:30.426234 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:22:30.426244 kernel: Remapping and enabling EFI services.
Jan 30 13:22:30.426251 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:22:30.426260 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:22:30.426267 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 13:22:30.426274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:30.426281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:22:30.426288 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:22:30.426297 kernel: SMP: Total of 2 processors activated.
Jan 30 13:22:30.426304 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:22:30.426311 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 13:22:30.426318 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:22:30.426325 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:22:30.426332 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:22:30.432886 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:22:30.432905 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:22:30.432912 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:22:30.432927 kernel: alternatives: applying system-wide alternatives
Jan 30 13:22:30.432934 kernel: devtmpfs: initialized
Jan 30 13:22:30.432942 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:22:30.432949 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:22:30.432956 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:22:30.432963 kernel: SMBIOS 3.1.0 present.
Jan 30 13:22:30.432970 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 13:22:30.432978 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:22:30.432985 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:22:30.432994 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:22:30.433001 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:22:30.433009 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:22:30.433016 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 13:22:30.433023 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:22:30.433030 kernel: cpuidle: using governor menu
Jan 30 13:22:30.433038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:22:30.433045 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:22:30.433052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:22:30.433061 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:22:30.433068 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:22:30.433075 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:22:30.433082 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:22:30.433089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:22:30.433103 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:22:30.433118 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433127 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:22:30.433134 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:22:30.433141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:22:30.433148 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:22:30.433155 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:22:30.433162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:22:30.433169 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:22:30.433176 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:22:30.433183 kernel: ACPI: Interpreter enabled
Jan 30 13:22:30.433192 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:22:30.433199 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:22:30.433206 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:22:30.433214 kernel: printk: bootconsole [pl11] disabled
Jan 30 13:22:30.433221 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 13:22:30.433228 kernel: iommu: Default domain type: Translated
Jan 30 13:22:30.433235 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:22:30.433242 kernel: efivars: Registered efivars operations
Jan 30 13:22:30.433249 kernel: vgaarb: loaded
Jan 30 13:22:30.433258 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:22:30.433265 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:22:30.433272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:22:30.433279 kernel: pnp: PnP ACPI init
Jan 30 13:22:30.433286 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 13:22:30.433293 kernel: NET: Registered PF_INET protocol family
Jan 30 13:22:30.433301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:30.433308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:22:30.433315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:22:30.433325 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:22:30.433332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:22:30.433350 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:22:30.433358 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.433365 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:30.433372 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:22:30.433379 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:22:30.433386 kernel: kvm [1]: HYP mode not available
Jan 30 13:22:30.433393 kernel: Initialise system trusted keyrings
Jan 30 13:22:30.433402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:22:30.433409 kernel: Key type asymmetric registered
Jan 30 13:22:30.433416 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:22:30.433423 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:22:30.433430 kernel: io scheduler mq-deadline registered
Jan 30 13:22:30.433437 kernel: io scheduler kyber registered
Jan 30 13:22:30.433444 kernel: io scheduler bfq registered
Jan 30 13:22:30.433451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:22:30.433458 kernel: thunder_xcv, ver 1.0
Jan 30 13:22:30.433467 kernel: thunder_bgx, ver 1.0
Jan 30 13:22:30.433474 kernel: nicpf, ver 1.0
Jan 30 13:22:30.433482 kernel: nicvf, ver 1.0
Jan 30 13:22:30.433651 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:22:30.433723 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:22:29 UTC (1738243349)
Jan 30 13:22:30.433733 kernel: efifb: probing for efifb
Jan 30 13:22:30.433741 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:22:30.433748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:22:30.433758 kernel: efifb: scrolling: redraw
Jan 30 13:22:30.433765 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:22:30.433772 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:22:30.433779 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:22:30.433786 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 13:22:30.433793 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:22:30.433800 kernel: No ACPI PMU IRQ for CPU0
Jan 30 13:22:30.433807 kernel: No ACPI PMU IRQ for CPU1
Jan 30 13:22:30.433814 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 13:22:30.433823 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:22:30.433830 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:22:30.433837 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:22:30.433844 kernel: Segment Routing with IPv6
Jan 30 13:22:30.433851 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:22:30.433858 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:22:30.433865 kernel: Key type dns_resolver registered
Jan 30 13:22:30.433872 kernel: registered taskstats version 1
Jan 30 13:22:30.433879 kernel: Loading compiled-in X.509 certificates
Jan 30 13:22:30.433888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:22:30.433896 kernel: Key type .fscrypt registered
Jan 30 13:22:30.433903 kernel: Key type fscrypt-provisioning registered
Jan 30 13:22:30.433910 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:22:30.433917 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:22:30.433924 kernel: ima: No architecture policies found
Jan 30 13:22:30.433931 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:22:30.433938 kernel: clk: Disabling unused clocks
Jan 30 13:22:30.433945 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:22:30.433954 kernel: Run /init as init process
Jan 30 13:22:30.433961 kernel: with arguments:
Jan 30 13:22:30.433968 kernel: /init
Jan 30 13:22:30.433975 kernel: with environment:
Jan 30 13:22:30.433982 kernel: HOME=/
Jan 30 13:22:30.433989 kernel: TERM=linux
Jan 30 13:22:30.433996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:22:30.434005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:22:30.434016 systemd[1]: Detected virtualization microsoft.
Jan 30 13:22:30.434024 systemd[1]: Detected architecture arm64.
Jan 30 13:22:30.434031 systemd[1]: Running in initrd.
Jan 30 13:22:30.434039 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:22:30.434047 systemd[1]: Hostname set to .
Jan 30 13:22:30.434055 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:22:30.434062 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:22:30.434070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:30.434080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:30.434088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:22:30.434096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:22:30.434104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:22:30.434112 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:22:30.434121 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:22:30.434131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:22:30.434139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:30.434146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:30.434154 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:30.434161 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:22:30.434169 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:22:30.434177 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:30.434184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:22:30.434192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:22:30.434202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:22:30.434210 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:22:30.434218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:30.434225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:30.434233 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:30.434241 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:30.434249 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:22:30.434257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:22:30.434266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:22:30.434274 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:22:30.434281 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:22:30.434289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:22:30.434317 systemd-journald[218]: Collecting audit messages is disabled.
Jan 30 13:22:30.434366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:30.434376 systemd-journald[218]: Journal started
Jan 30 13:22:30.434400 systemd-journald[218]: Runtime Journal (/run/log/journal/3afada14e3f14c159fbc868f4d170f63) is 8.0M, max 78.5M, 70.5M free.
Jan 30 13:22:30.434799 systemd-modules-load[219]: Inserted module 'overlay'
Jan 30 13:22:30.452317 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:22:30.453045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:22:30.474053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:30.510694 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:22:30.510723 kernel: Bridge firewalling registered
Jan 30 13:22:30.502464 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:22:30.509739 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 30 13:22:30.515851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:30.527604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:30.558745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:30.567807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:22:30.602506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:22:30.611595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:22:30.635605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:30.644120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:30.658204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:22:30.671859 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:30.701656 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:22:30.717598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:30.739023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:22:30.758717 dracut-cmdline[251]: dracut-dracut-053
Jan 30 13:22:30.758717 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:30.810592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:30.815054 systemd-resolved[257]: Positive Trust Anchors:
Jan 30 13:22:30.815065 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:30.815100 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:30.817473 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 30 13:22:30.821903 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:30.837228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:30.969385 kernel: SCSI subsystem initialized
Jan 30 13:22:30.978367 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:22:30.991364 kernel: iscsi: registered transport (tcp)
Jan 30 13:22:31.015565 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:22:31.015639 kernel: QLogic iSCSI HBA Driver
Jan 30 13:22:31.066660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:22:31.085000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:22:31.119509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:22:31.119547 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:22:31.126368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:22:31.175362 kernel: raid6: neonx8 gen() 15789 MB/s
Jan 30 13:22:31.195350 kernel: raid6: neonx4 gen() 15808 MB/s
Jan 30 13:22:31.215348 kernel: raid6: neonx2 gen() 13198 MB/s
Jan 30 13:22:31.236353 kernel: raid6: neonx1 gen() 10502 MB/s
Jan 30 13:22:31.256348 kernel: raid6: int64x8 gen() 6798 MB/s
Jan 30 13:22:31.276349 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 30 13:22:31.297349 kernel: raid6: int64x2 gen() 6111 MB/s
Jan 30 13:22:31.320589 kernel: raid6: int64x1 gen() 5061 MB/s
Jan 30 13:22:31.320610 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Jan 30 13:22:31.346242 kernel: raid6: .... xor() 12123 MB/s, rmw enabled
Jan 30 13:22:31.346255 kernel: raid6: using neon recovery algorithm
Jan 30 13:22:31.358600 kernel: xor: measuring software checksum speed
Jan 30 13:22:31.358619 kernel: 8regs : 21426 MB/sec
Jan 30 13:22:31.362382 kernel: 32regs : 21647 MB/sec
Jan 30 13:22:31.365808 kernel: arm64_neon : 27729 MB/sec
Jan 30 13:22:31.370096 kernel: xor: using function: arm64_neon (27729 MB/sec)
Jan 30 13:22:31.423360 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:22:31.434496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:22:31.452557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:31.487285 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 30 13:22:31.492919 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:31.523734 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:22:31.541794 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 30 13:22:31.571799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:31.591622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:22:31.634690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:31.656566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:22:31.694250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:22:31.710056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:31.726628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:31.743875 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:22:31.763840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:22:31.776006 kernel: hv_vmbus: Vmbus version:5.3 Jan 30 13:22:31.786379 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:22:31.805397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:22:31.852955 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 30 13:22:31.852986 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:22:31.852996 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:22:31.853165 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:22:31.853176 kernel: PTP clock support registered Jan 30 13:22:31.853185 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:22:31.805577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:22:31.897136 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 30 13:22:31.897162 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:22:31.897181 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:22:31.897190 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:22:31.840147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:31.926551 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:22:31.926580 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:22:31.926589 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:22:31.926598 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:22:31.867415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:31.953667 kernel: scsi host0: storvsc_host_t Jan 30 13:22:31.867653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:31.991287 kernel: scsi host1: storvsc_host_t Jan 30 13:22:31.991527 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:22:31.991548 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:22:31.933491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:31.953082 systemd-resolved[257]: Clock change detected. Flushing caches. Jan 30 13:22:32.019704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:32.031865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:32.062536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 13:22:32.130975 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:22:32.142886 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: VF slot 1 added Jan 30 13:22:32.143062 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:22:32.143098 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:22:32.079649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:32.079977 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:32.176977 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:22:32.177001 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:22:32.272038 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:22:32.272414 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:22:32.276381 kernel: hv_pci 0ec50c40-8cc6-489a-a6bb-a95e3d53d8b3: PCI VMBus probing: Using version 0x10004 Jan 30 13:22:32.361953 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:22:32.362122 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:22:32.362221 kernel: hv_pci 0ec50c40-8cc6-489a-a6bb-a95e3d53d8b3: PCI host bridge to bus 8cc6:00 Jan 30 13:22:32.362306 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:32.362317 kernel: pci_bus 8cc6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 30 13:22:32.362413 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:22:32.362498 kernel: pci_bus 8cc6:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:22:32.362572 kernel: pci 8cc6:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 30 13:22:32.362667 kernel: pci 8cc6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 13:22:32.362749 kernel: pci 8cc6:00:02.0: enabling Extended Tags Jan 30 13:22:32.362827 kernel: pci 8cc6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8cc6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe 
x16 link) Jan 30 13:22:32.362907 kernel: pci_bus 8cc6:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:22:32.362982 kernel: pci 8cc6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 13:22:32.099442 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:32.183826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:32.251864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:32.286404 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:32.360151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:32.422493 kernel: mlx5_core 8cc6:00:02.0: enabling device (0000 -> 0002) Jan 30 13:22:32.750824 kernel: mlx5_core 8cc6:00:02.0: firmware version: 16.30.1284 Jan 30 13:22:32.750998 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (501) Jan 30 13:22:32.751009 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (485) Jan 30 13:22:32.751019 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:32.751037 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:32.751046 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: VF registering: eth1 Jan 30 13:22:32.751164 kernel: mlx5_core 8cc6:00:02.0 eth1: joined to eth0 Jan 30 13:22:32.751263 kernel: mlx5_core 8cc6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 30 13:22:32.485229 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 13:22:32.767589 kernel: mlx5_core 8cc6:00:02.0 enP36038s1: renamed from eth1 Jan 30 13:22:32.519828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 30 13:22:32.556943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:22:32.607945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:22:32.615197 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:22:32.629302 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:22:33.667743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:22:33.667805 disk-uuid[602]: The operation has completed successfully. Jan 30 13:22:33.730443 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:22:33.732138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:22:33.765263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:22:33.779387 sh[692]: Success Jan 30 13:22:33.815160 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 13:22:34.153958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:22:34.174573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:22:34.187511 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:22:34.228429 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 30 13:22:34.228496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:34.243131 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:22:34.243179 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:22:34.250518 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:22:34.754236 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 30 13:22:34.761624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:22:34.791510 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:22:34.802534 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:22:34.863738 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:34.863761 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:34.863770 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:34.916190 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:34.939071 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:22:34.945066 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:34.952927 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:22:34.970430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:22:35.003083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:35.026309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:22:35.051097 systemd-networkd[876]: lo: Link UP Jan 30 13:22:35.051239 systemd-networkd[876]: lo: Gained carrier Jan 30 13:22:35.052887 systemd-networkd[876]: Enumeration completed Jan 30 13:22:35.055798 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:22:35.056417 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:35.056421 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:22:35.069187 systemd[1]: Reached target network.target - Network. Jan 30 13:22:35.165133 kernel: mlx5_core 8cc6:00:02.0 enP36038s1: Link up Jan 30 13:22:35.213136 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: Data path switched to VF: enP36038s1 Jan 30 13:22:35.213849 systemd-networkd[876]: enP36038s1: Link UP Jan 30 13:22:35.214103 systemd-networkd[876]: eth0: Link UP Jan 30 13:22:35.214520 systemd-networkd[876]: eth0: Gained carrier Jan 30 13:22:35.214532 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:35.244819 systemd-networkd[876]: enP36038s1: Gained carrier Jan 30 13:22:35.263185 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 13:22:36.325438 ignition[844]: Ignition 2.20.0 Jan 30 13:22:36.325449 ignition[844]: Stage: fetch-offline Jan 30 13:22:36.330965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:36.325489 ignition[844]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:36.325498 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:36.325597 ignition[844]: parsed url from cmdline: "" Jan 30 13:22:36.361411 systemd-networkd[876]: eth0: Gained IPv6LL Jan 30 13:22:36.325600 ignition[844]: no config URL provided Jan 30 13:22:36.365421 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:22:36.325604 ignition[844]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:22:36.325611 ignition[844]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:22:36.325617 ignition[844]: failed to fetch config: resource requires networking Jan 30 13:22:36.325810 ignition[844]: Ignition finished successfully Jan 30 13:22:36.390269 ignition[887]: Ignition 2.20.0 Jan 30 13:22:36.390278 ignition[887]: Stage: fetch Jan 30 13:22:36.390505 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:36.390516 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:36.390623 ignition[887]: parsed url from cmdline: "" Jan 30 13:22:36.390627 ignition[887]: no config URL provided Jan 30 13:22:36.390631 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:22:36.390639 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:22:36.390668 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:22:36.556247 ignition[887]: GET result: OK Jan 30 13:22:36.556340 ignition[887]: config has been read from IMDS userdata Jan 30 13:22:36.556395 ignition[887]: parsing config with SHA512: e5a6dc2e3dca1d58db79274419554ff9247d6ff28e868cfea2457fdd295d10272f7be5d157e65969ff9b742f58362e09e8d090027318ae6ecdf64d1ef20447a0 Jan 30 13:22:36.561221 unknown[887]: fetched base config from "system" Jan 30 13:22:36.561229 unknown[887]: fetched base config from "system" Jan 30 13:22:36.566407 ignition[887]: fetch: fetch complete Jan 30 13:22:36.561234 unknown[887]: fetched user config from "azure" Jan 30 13:22:36.566414 ignition[887]: fetch: fetch passed Jan 30 13:22:36.573023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:22:36.566481 ignition[887]: Ignition finished successfully Jan 30 13:22:36.599356 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 13:22:36.620098 ignition[893]: Ignition 2.20.0 Jan 30 13:22:36.623507 ignition[893]: Stage: kargs Jan 30 13:22:36.629536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:22:36.623764 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:36.623776 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:36.624782 ignition[893]: kargs: kargs passed Jan 30 13:22:36.624833 ignition[893]: Ignition finished successfully Jan 30 13:22:36.663632 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:22:36.685881 ignition[899]: Ignition 2.20.0 Jan 30 13:22:36.685893 ignition[899]: Stage: disks Jan 30 13:22:36.691029 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:22:36.686065 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:36.698349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:22:36.686075 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:36.709125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:22:36.687046 ignition[899]: disks: disks passed Jan 30 13:22:36.723437 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:22:36.687095 ignition[899]: Ignition finished successfully Jan 30 13:22:36.736349 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:22:36.749815 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:22:36.782392 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:22:36.818496 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:22:36.827273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:22:36.846445 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 30 13:22:36.906134 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none. Jan 30 13:22:36.906596 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:22:36.912320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:22:36.935390 systemd-networkd[876]: enP36038s1: Gained IPv6LL Jan 30 13:22:36.941359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:22:36.954857 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:22:36.974634 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Jan 30 13:22:36.984006 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:22:37.012625 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:37.012653 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:37.012663 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:36.999331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:22:36.999371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:22:37.021588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:22:37.060795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:22:37.075440 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:37.070178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:22:37.736609 coreos-metadata[921]: Jan 30 13:22:37.736 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:22:37.757527 coreos-metadata[921]: Jan 30 13:22:37.757 INFO Fetch successful Jan 30 13:22:37.765715 coreos-metadata[921]: Jan 30 13:22:37.765 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:22:37.793956 coreos-metadata[921]: Jan 30 13:22:37.793 INFO Fetch successful Jan 30 13:22:37.800575 coreos-metadata[921]: Jan 30 13:22:37.800 INFO wrote hostname ci-4186.1.0-a-4db8cd7df2 to /sysroot/etc/hostname Jan 30 13:22:37.810980 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:22:38.126144 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:22:38.231343 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:22:38.243183 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:22:38.255644 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:22:39.570926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:39.587623 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:22:39.600380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:22:39.622499 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:39.617288 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:22:39.652145 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 13:22:39.671149 ignition[1039]: INFO : Ignition 2.20.0 Jan 30 13:22:39.671149 ignition[1039]: INFO : Stage: mount Jan 30 13:22:39.671149 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:39.671149 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:39.702770 ignition[1039]: INFO : mount: mount passed Jan 30 13:22:39.702770 ignition[1039]: INFO : Ignition finished successfully Jan 30 13:22:39.681173 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:22:39.707343 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:22:39.729443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:22:39.756135 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050) Jan 30 13:22:39.770812 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:39.770868 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:39.775328 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:39.783128 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:39.784730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:22:39.814756 ignition[1067]: INFO : Ignition 2.20.0 Jan 30 13:22:39.814756 ignition[1067]: INFO : Stage: files Jan 30 13:22:39.824072 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:39.824072 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:39.824072 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:22:39.824072 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:22:39.824072 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:22:39.924100 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:22:39.932267 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:22:39.932267 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:22:39.929584 unknown[1067]: wrote ssh authorized keys file for user: core Jan 30 13:22:39.976178 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 13:22:39.989032 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 13:22:40.010126 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:22:40.100275 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 13:22:40.100275 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:22:40.123237 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 13:22:40.567867 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:22:40.634484 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:22:40.645866 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 13:22:40.704055 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 30 13:22:41.064035 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:22:41.246218 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 30 13:22:41.246218 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:22:41.271043 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jan 30 13:22:41.285442 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:22:41.285442 ignition[1067]: INFO : files: files passed Jan 30 13:22:41.285442 ignition[1067]: INFO : Ignition finished successfully Jan 30 13:22:41.284255 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:22:41.331392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:22:41.351330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:22:41.379731 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:22:41.432291 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:41.432291 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:41.379830 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:22:41.461726 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:41.398134 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:41.411411 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:22:41.454317 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:22:41.511234 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:22:41.511372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:22:41.526305 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:22:41.543821 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 30 13:22:41.559148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:22:41.581432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:22:41.609756 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:41.632436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:22:41.658068 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:41.676721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:41.698685 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:22:41.707082 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:22:41.707311 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:41.734195 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:22:41.742985 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:22:41.758696 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:22:41.774298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:22:41.787987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:22:41.802165 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:22:41.814353 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:41.827551 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:22:41.839146 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:22:41.851189 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:22:41.860904 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 30 13:22:41.861080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:41.877390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:41.888407 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:41.900641 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:22:41.900751 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:41.914133 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:22:41.914304 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:22:41.931280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:22:41.931468 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:41.947320 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:22:41.947471 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:22:41.958080 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:22:41.958253 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:22:42.021166 ignition[1120]: INFO : Ignition 2.20.0 Jan 30 13:22:42.021166 ignition[1120]: INFO : Stage: umount Jan 30 13:22:42.021166 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:42.021166 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:22:42.021166 ignition[1120]: INFO : umount: umount passed Jan 30 13:22:42.021166 ignition[1120]: INFO : Ignition finished successfully Jan 30 13:22:41.991271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:22:42.006200 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 30 13:22:42.006430 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:42.023509 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:22:42.029753 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:22:42.029938 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:42.039401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:22:42.039519 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:22:42.053500 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:22:42.053606 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:22:42.061931 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:22:42.062045 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:22:42.082791 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:22:42.082857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:22:42.097490 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:22:42.097542 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:22:42.108073 systemd[1]: Stopped target network.target - Network. Jan 30 13:22:42.124447 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:22:42.124523 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:42.139117 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:22:42.144241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:22:42.151134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:42.164743 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:22:42.175879 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 30 13:22:42.186314 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:22:42.186374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:42.200386 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:22:42.200442 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:42.212526 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:22:42.212591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:22:42.223356 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:22:42.223401 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:42.234768 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:22:42.246081 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:22:42.251944 systemd-networkd[876]: eth0: DHCPv6 lease lost Jan 30 13:22:42.266860 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:22:42.267524 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:22:42.267651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:22:42.277720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:22:42.277816 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:22:42.290667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:22:42.532360 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: Data path switched from VF: enP36038s1 Jan 30 13:22:42.292876 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:22:42.305734 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:22:42.305811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:42.326353 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 30 13:22:42.337812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:22:42.337901 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:42.352661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:22:42.352724 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:42.363005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:22:42.363058 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:42.374188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:22:42.374241 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:42.385402 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:42.428216 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:22:42.428415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:42.441718 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:22:42.441766 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:42.455951 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:22:42.455990 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:42.467961 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:22:42.468018 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:42.483677 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:22:42.483742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:22:42.501212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:22:42.501279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:42.551398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:22:42.566196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:22:42.566303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:42.579168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:42.579231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:42.593165 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:22:42.593286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:22:42.603591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:22:42.603688 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:22:43.845206 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:22:43.845310 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:22:43.851000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:22:43.862184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:22:43.862254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:43.890382 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:22:43.901454 systemd[1]: Switching root. Jan 30 13:22:44.220689 systemd-journald[218]: Journal stopped Jan 30 13:22:51.297103 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:22:51.297152 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:22:51.297164 kernel: SELinux: policy capability open_perms=1 Jan 30 13:22:51.297174 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:22:51.297181 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:22:51.297189 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:22:51.297197 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:22:51.297205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:22:51.297213 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:22:51.297221 kernel: audit: type=1403 audit(1738243367.530:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:22:51.297231 systemd[1]: Successfully loaded SELinux policy in 92.196ms. Jan 30 13:22:51.297240 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.312ms. Jan 30 13:22:51.297250 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:51.297259 systemd[1]: Detected virtualization microsoft. Jan 30 13:22:51.297268 systemd[1]: Detected architecture arm64. Jan 30 13:22:51.297278 systemd[1]: Detected first boot. Jan 30 13:22:51.297287 systemd[1]: Hostname set to . Jan 30 13:22:51.297298 systemd[1]: Initializing machine ID from random generator. Jan 30 13:22:51.297307 zram_generator::config[1163]: No configuration found. Jan 30 13:22:51.297316 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:22:51.297325 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:22:51.297335 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 30 13:22:51.297344 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:22:51.297353 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:22:51.297362 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:22:51.297371 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:22:51.297380 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:22:51.297389 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:22:51.297400 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:22:51.297409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:22:51.297418 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:22:51.297426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:51.297436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:51.297445 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:22:51.297454 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:22:51.297463 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:22:51.297472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:22:51.297482 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:22:51.297492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:51.297501 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 30 13:22:51.297513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:22:51.297522 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:22:51.297531 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:22:51.297540 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:51.297551 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:22:51.297560 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:22:51.297569 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:22:51.297578 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:22:51.297587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:22:51.297596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:51.297605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:51.297618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:51.297627 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:22:51.297636 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:22:51.297645 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:22:51.297654 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:22:51.297663 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:22:51.297674 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:22:51.297683 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:22:51.297694 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:22:51.297703 systemd[1]: Reached target machines.target - Containers. Jan 30 13:22:51.297713 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:22:51.297722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:51.297731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:22:51.297741 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:22:51.297751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:22:51.297761 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:22:51.297770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:22:51.297779 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:22:51.297789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:22:51.297798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:22:51.297807 kernel: fuse: init (API version 7.39) Jan 30 13:22:51.297816 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:22:51.297825 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:22:51.297836 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:22:51.297845 kernel: ACPI: bus type drm_connector registered Jan 30 13:22:51.297853 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 30 13:22:51.297862 kernel: loop: module loaded Jan 30 13:22:51.297870 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:22:51.297896 systemd-journald[1259]: Collecting audit messages is disabled. Jan 30 13:22:51.297920 systemd-journald[1259]: Journal started Jan 30 13:22:51.297944 systemd-journald[1259]: Runtime Journal (/run/log/journal/876c6f84ae2048e0aaba56e7b018bea4) is 8.0M, max 78.5M, 70.5M free. Jan 30 13:22:50.278564 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:22:50.484626 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:22:50.485029 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:22:50.485373 systemd[1]: systemd-journald.service: Consumed 3.697s CPU time. Jan 30 13:22:51.515826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:22:51.542849 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:22:51.562740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:22:51.577743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:22:51.587914 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:22:51.587989 systemd[1]: Stopped verity-setup.service. Jan 30 13:22:51.605597 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:22:51.606427 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:22:51.612883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:22:51.619624 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:22:51.626064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:22:51.633963 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 30 13:22:51.642378 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:22:51.649897 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:22:51.659318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:51.669194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:22:51.669411 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:22:51.680053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:22:51.680287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:22:51.689724 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:22:51.689893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:22:51.698891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:22:51.699054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:22:51.709803 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:22:51.709956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:22:51.719525 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:22:51.719655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:22:51.728864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:51.736933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:22:51.745754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:22:51.754068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:51.771005 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 30 13:22:51.783220 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:22:51.795329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:22:51.803370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:22:51.803416 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:22:51.810663 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:22:51.826289 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:22:51.834300 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:22:51.843392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:51.856924 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:22:51.865516 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:22:51.872165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:22:51.873680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:22:51.880084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:22:51.882529 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:22:51.894418 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:22:51.909409 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 30 13:22:51.925228 systemd-journald[1259]: Time spent on flushing to /var/log/journal/876c6f84ae2048e0aaba56e7b018bea4 is 47.285ms for 904 entries. Jan 30 13:22:51.925228 systemd-journald[1259]: System Journal (/var/log/journal/876c6f84ae2048e0aaba56e7b018bea4) is 8.0M, max 2.6G, 2.6G free. Jan 30 13:22:52.993173 systemd-journald[1259]: Received client request to flush runtime journal. Jan 30 13:22:52.993249 kernel: loop0: detected capacity change from 0 to 189592 Jan 30 13:22:52.993270 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:22:52.993286 kernel: loop1: detected capacity change from 0 to 113552 Jan 30 13:22:51.938200 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:22:51.947098 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:22:51.955873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:22:51.968216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:22:51.977870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:22:52.002513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:52.011256 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:22:52.024419 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:22:52.032410 udevadm[1300]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:22:52.995200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:22:53.105594 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:22:53.106617 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 30 13:22:53.454922 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:22:53.471316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:22:53.503160 kernel: loop2: detected capacity change from 0 to 28752 Jan 30 13:22:53.521363 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 30 13:22:53.521378 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 30 13:22:53.527709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:54.030448 kernel: loop3: detected capacity change from 0 to 116784 Jan 30 13:22:54.508137 kernel: loop4: detected capacity change from 0 to 189592 Jan 30 13:22:54.525141 kernel: loop5: detected capacity change from 0 to 113552 Jan 30 13:22:54.537217 kernel: loop6: detected capacity change from 0 to 28752 Jan 30 13:22:54.548560 kernel: loop7: detected capacity change from 0 to 116784 Jan 30 13:22:54.553027 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 13:22:54.553869 (sd-merge)[1321]: Merged extensions into '/usr'. Jan 30 13:22:54.559151 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:22:54.559435 systemd[1]: Reloading... Jan 30 13:22:54.629258 zram_generator::config[1343]: No configuration found. Jan 30 13:22:55.433256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:22:55.493760 systemd[1]: Reloading finished in 933 ms. Jan 30 13:22:55.521963 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:22:55.529156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 30 13:22:55.544340 systemd[1]: Starting ensure-sysext.service... Jan 30 13:22:55.552349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:22:55.562382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:55.583738 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:22:55.583968 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:22:55.584668 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:22:55.584886 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Jan 30 13:22:55.584938 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Jan 30 13:22:55.592504 systemd-udevd[1405]: Using default interface naming scheme 'v255'. Jan 30 13:22:55.850358 systemd[1]: Reloading requested from client PID 1403 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:22:55.850375 systemd[1]: Reloading... Jan 30 13:22:55.909456 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:22:55.909470 systemd-tmpfiles[1404]: Skipping /boot Jan 30 13:22:55.923548 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:22:55.923568 systemd-tmpfiles[1404]: Skipping /boot Jan 30 13:22:55.930136 zram_generator::config[1434]: No configuration found. Jan 30 13:22:56.047798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:22:56.143958 systemd[1]: Reloading finished in 293 ms. Jan 30 13:22:56.158441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:22:56.180140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:56.216520 systemd[1]: Finished ensure-sysext.service. Jan 30 13:22:56.225565 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:22:56.227616 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jan 30 13:22:56.243321 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:22:56.258624 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:22:56.265441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:56.267368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:22:56.274317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:22:56.285393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:22:56.294281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:22:56.300373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:56.302731 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:22:56.313430 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:22:56.325171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:22:56.331701 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:22:56.338476 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:22:56.345429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:22:56.347157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 13:22:56.355440 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:22:56.355648 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:22:56.363802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:56.363970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:56.373346 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:56.375066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:56.383477 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Jan 30 13:22:56.395010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:56.395085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:56.403606 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:22:56.410164 kernel: hv_vmbus: registering driver hyperv_fb
Jan 30 13:22:56.420401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:56.433655 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 30 13:22:56.433689 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 30 13:22:56.447243 kernel: Console: switching to colour dummy device 80x25
Jan 30 13:22:56.434992 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:22:56.458835 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:22:56.463690 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:22:56.483138 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:22:56.509314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:56.510282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:56.523271 kernel: hv_vmbus: registering driver hv_balloon
Jan 30 13:22:56.523370 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 30 13:22:56.527360 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 30 13:22:56.528998 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:22:56.549564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:56.589526 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1502)
Jan 30 13:22:56.665855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:22:56.677329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:22:56.874030 systemd-networkd[1535]: lo: Link UP
Jan 30 13:22:56.874037 systemd-networkd[1535]: lo: Gained carrier
Jan 30 13:22:56.875974 systemd-networkd[1535]: Enumeration completed
Jan 30 13:22:56.876138 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:56.876453 systemd-networkd[1535]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:56.876456 systemd-networkd[1535]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:56.890795 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:22:56.929129 kernel: mlx5_core 8cc6:00:02.0 enP36038s1: Link up
Jan 30 13:22:56.953493 systemd-resolved[1536]: Positive Trust Anchors:
Jan 30 13:22:56.953513 systemd-resolved[1536]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:56.953544 systemd-resolved[1536]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:56.956493 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:22:56.959827 systemd-resolved[1536]: Using system hostname 'ci-4186.1.0-a-4db8cd7df2'.
Jan 30 13:22:56.974213 kernel: hv_netvsc 0022487b-0937-0022-487b-09370022487b eth0: Data path switched to VF: enP36038s1
Jan 30 13:22:56.978646 systemd-networkd[1535]: enP36038s1: Link UP
Jan 30 13:22:56.979078 systemd-networkd[1535]: eth0: Link UP
Jan 30 13:22:56.979081 systemd-networkd[1535]: eth0: Gained carrier
Jan 30 13:22:56.979097 systemd-networkd[1535]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:56.982404 augenrules[1642]: No rules
Jan 30 13:22:56.985271 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:56.986155 systemd-networkd[1535]: enP36038s1: Gained carrier
Jan 30 13:22:56.994342 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:22:56.996176 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:22:57.003038 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:57.008821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:57.016202 systemd-networkd[1535]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 13:22:57.017372 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:22:57.030305 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:22:57.096395 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:57.125062 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:22:57.133679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:57.153292 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:22:57.164745 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:57.193148 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:22:57.526257 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:22:57.533809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:57.541523 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:22:58.503273 systemd-networkd[1535]: eth0: Gained IPv6LL
Jan 30 13:22:58.507367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:22:58.515902 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:22:58.887362 systemd-networkd[1535]: enP36038s1: Gained IPv6LL
Jan 30 13:22:58.997296 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:22:59.009090 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:22:59.025352 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:22:59.046551 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:22:59.054872 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:59.062479 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:22:59.072186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:22:59.081352 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:22:59.087369 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:22:59.094302 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:22:59.101475 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:22:59.101514 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:59.106495 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:59.113336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:22:59.121101 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:22:59.135259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:22:59.141376 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:22:59.147499 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:59.152653 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:59.158507 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:59.158531 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:59.171229 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 30 13:22:59.179284 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:22:59.194364 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:22:59.200939 (chronyd)[1664]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 30 13:22:59.210315 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:22:59.213269 chronyd[1669]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 30 13:22:59.219341 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:22:59.221420 chronyd[1669]: Timezone right/UTC failed leap second check, ignoring
Jan 30 13:22:59.227212 chronyd[1669]: Loaded seccomp filter (level 2)
Jan 30 13:22:59.237571 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:22:59.243875 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:22:59.243918 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 30 13:22:59.247487 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 30 13:22:59.254285 jq[1670]: false
Jan 30 13:22:59.256792 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 30 13:22:59.258829 KVP[1675]: KVP starting; pid is:1675
Jan 30 13:22:59.259387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:22:59.274052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:22:59.280481 dbus-daemon[1667]: [system] SELinux support is enabled
Jan 30 13:22:59.292402 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found loop4
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found loop5
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found loop6
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found loop7
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found sda
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found sda1
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found sda2
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found sda3
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found usr
Jan 30 13:22:59.298686 extend-filesystems[1674]: Found sda4
Jan 30 13:22:59.428618 kernel: hv_utils: KVP IC version 4.0
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.356 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.359 INFO Fetch successful
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.359 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.369 INFO Fetch successful
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.369 INFO Fetching http://168.63.129.16/machine/6fd4ecc7-db92-46b7-a491-e2c2f9357cb0/3ba0159a%2Da1f1%2D4c19%2Dada0%2Ded8129ce81ab.%5Fci%2D4186.1.0%2Da%2D4db8cd7df2?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.378 INFO Fetch successful
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.378 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:22:59.428741 coreos-metadata[1666]: Jan 30 13:22:59.395 INFO Fetch successful
Jan 30 13:22:59.342856 KVP[1675]: KVP LIC Version: 3.1
Jan 30 13:22:59.302322 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:22:59.429336 extend-filesystems[1674]: Found sda6
Jan 30 13:22:59.429336 extend-filesystems[1674]: Found sda7
Jan 30 13:22:59.429336 extend-filesystems[1674]: Found sda9
Jan 30 13:22:59.429336 extend-filesystems[1674]: Checking size of /dev/sda9
Jan 30 13:22:59.429336 extend-filesystems[1674]: Old size kept for /dev/sda9
Jan 30 13:22:59.429336 extend-filesystems[1674]: Found sr0
Jan 30 13:22:59.328410 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:22:59.351483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:22:59.377415 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:22:59.392288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:22:59.392811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:22:59.543444 update_engine[1703]: I20250130 13:22:59.463541 1703 main.cc:92] Flatcar Update Engine starting
Jan 30 13:22:59.543444 update_engine[1703]: I20250130 13:22:59.483497 1703 update_check_scheduler.cc:74] Next update check in 3m38s
Jan 30 13:22:59.407867 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:22:59.543758 jq[1706]: true
Jan 30 13:22:59.421938 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:22:59.433691 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:22:59.447714 systemd[1]: Started chronyd.service - NTP client/server.
Jan 30 13:22:59.470675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:22:59.470862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:22:59.471160 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:22:59.471312 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:22:59.487563 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:22:59.487724 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:22:59.515801 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:22:59.521588 systemd-logind[1699]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 30 13:22:59.522492 systemd-logind[1699]: New seat seat0.
Jan 30 13:22:59.551650 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:22:59.558331 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1502)
Jan 30 13:22:59.568579 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:22:59.571366 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:22:59.624175 jq[1728]: true
Jan 30 13:22:59.629485 (ntainerd)[1738]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:22:59.637006 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:22:59.637061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:22:59.637971 dbus-daemon[1667]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 13:22:59.651211 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:22:59.651240 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:22:59.660955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:22:59.674739 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:22:59.687552 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 13:22:59.707982 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:22:59.738217 tar[1721]: linux-arm64/helm
Jan 30 13:22:59.805676 bash[1792]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:22:59.808529 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:22:59.820590 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:22:59.913638 locksmithd[1771]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:23:00.053992 containerd[1738]: time="2025-01-30T13:23:00.053885260Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:23:00.123119 containerd[1738]: time="2025-01-30T13:23:00.121156900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.127601 containerd[1738]: time="2025-01-30T13:23:00.127546820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:23:00.127601 containerd[1738]: time="2025-01-30T13:23:00.127595340Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:23:00.127694 containerd[1738]: time="2025-01-30T13:23:00.127616340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:23:00.127796 containerd[1738]: time="2025-01-30T13:23:00.127772620Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:23:00.127820 containerd[1738]: time="2025-01-30T13:23:00.127796300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.127881 containerd[1738]: time="2025-01-30T13:23:00.127860900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:23:00.127881 containerd[1738]: time="2025-01-30T13:23:00.127877620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128082 containerd[1738]: time="2025-01-30T13:23:00.128057660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128082 containerd[1738]: time="2025-01-30T13:23:00.128078540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128160 containerd[1738]: time="2025-01-30T13:23:00.128093900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128160 containerd[1738]: time="2025-01-30T13:23:00.128102980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128238 containerd[1738]: time="2025-01-30T13:23:00.128217220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128452 containerd[1738]: time="2025-01-30T13:23:00.128429820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128564 containerd[1738]: time="2025-01-30T13:23:00.128540740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:23:00.128564 containerd[1738]: time="2025-01-30T13:23:00.128560820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:23:00.128655 containerd[1738]: time="2025-01-30T13:23:00.128636260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:23:00.128707 containerd[1738]: time="2025-01-30T13:23:00.128690340Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:23:00.160770 containerd[1738]: time="2025-01-30T13:23:00.160715940Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:23:00.160868 containerd[1738]: time="2025-01-30T13:23:00.160793020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:23:00.160868 containerd[1738]: time="2025-01-30T13:23:00.160810700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:23:00.160868 containerd[1738]: time="2025-01-30T13:23:00.160828700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:23:00.160868 containerd[1738]: time="2025-01-30T13:23:00.160844060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:23:00.161056 containerd[1738]: time="2025-01-30T13:23:00.161034260Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:23:00.161325 containerd[1738]: time="2025-01-30T13:23:00.161304380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:23:00.161439 containerd[1738]: time="2025-01-30T13:23:00.161418180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:23:00.161439 containerd[1738]: time="2025-01-30T13:23:00.161445700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:23:00.161497 containerd[1738]: time="2025-01-30T13:23:00.161463260Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:23:00.161497 containerd[1738]: time="2025-01-30T13:23:00.161478500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161497 containerd[1738]: time="2025-01-30T13:23:00.161492060Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161569 containerd[1738]: time="2025-01-30T13:23:00.161505340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161569 containerd[1738]: time="2025-01-30T13:23:00.161519660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161569 containerd[1738]: time="2025-01-30T13:23:00.161535300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161569 containerd[1738]: time="2025-01-30T13:23:00.161547740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161569 containerd[1738]: time="2025-01-30T13:23:00.161560020Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161574060Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161595260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161608460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161620380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161633140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161657 containerd[1738]: time="2025-01-30T13:23:00.161645220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161659660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161671940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161687460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161699580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161713300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161725180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161736060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161747500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.161776 containerd[1738]: time="2025-01-30T13:23:00.161763060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161788020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161800940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161811460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161863180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161882340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161892860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161904780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161913980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161925500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161935820Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:23:00.162314 containerd[1738]: time="2025-01-30T13:23:00.161945420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:23:00.162544 containerd[1738]: time="2025-01-30T13:23:00.162251900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:23:00.162544 containerd[1738]: time="2025-01-30T13:23:00.162300140Z" level=info msg="Connect containerd service"
Jan 30 13:23:00.162544 containerd[1738]: time="2025-01-30T13:23:00.162336220Z" level=info msg="using legacy CRI server"
Jan 30 13:23:00.162544 containerd[1738]: time="2025-01-30T13:23:00.162343860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:23:00.162544 containerd[1738]: time="2025-01-30T13:23:00.162471820Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:23:00.169200 containerd[1738]: time="2025-01-30T13:23:00.169042740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:23:00.169478 containerd[1738]: time="2025-01-30T13:23:00.169412740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:23:00.169478 containerd[1738]: time="2025-01-30T13:23:00.169450540Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:23:00.169625 containerd[1738]: time="2025-01-30T13:23:00.169490180Z" level=info msg="Start subscribing containerd event"
Jan 30 13:23:00.169625 containerd[1738]: time="2025-01-30T13:23:00.169529620Z" level=info msg="Start recovering state"
Jan 30 13:23:00.169625 containerd[1738]: time="2025-01-30T13:23:00.169603340Z" level=info msg="Start event monitor"
Jan 30 13:23:00.169625 containerd[1738]: time="2025-01-30T13:23:00.169622700Z" level=info msg="Start snapshots syncer"
Jan 30 13:23:00.169718 containerd[1738]: time="2025-01-30T13:23:00.169634180Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:23:00.169718 containerd[1738]: time="2025-01-30T13:23:00.169642420Z" level=info msg="Start streaming server"
Jan 30 13:23:00.169804 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:23:00.183278 containerd[1738]: time="2025-01-30T13:23:00.183229140Z" level=info msg="containerd successfully booted in 0.133910s"
Jan 30 13:23:00.390359 tar[1721]: linux-arm64/LICENSE
Jan 30 13:23:00.390421 tar[1721]: linux-arm64/README.md
Jan 30 13:23:00.403874 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:23:00.473227 sshd_keygen[1710]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:23:00.496701 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:23:00.508910 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:23:00.520419 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 30 13:23:00.528701 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:23:00.528875 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:23:00.544722 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:23:00.561308 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 30 13:23:00.570146 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:23:00.584433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:00.592591 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:00.607645 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:23:00.626998 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:23:00.638917 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:23:00.645511 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:23:00.652849 systemd[1]: Startup finished in 753ms (kernel) + 17.653s (initrd) + 13.213s (userspace) = 31.620s. Jan 30 13:23:00.695472 agetty[1843]: failed to open credentials directory Jan 30 13:23:00.696731 agetty[1842]: failed to open credentials directory Jan 30 13:23:00.809379 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:23:00.812359 login[1843]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:23:00.823156 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:23:00.829962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:23:00.836389 systemd-logind[1699]: New session 2 of user core. Jan 30 13:23:00.843817 systemd-logind[1699]: New session 1 of user core. Jan 30 13:23:00.851448 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:23:00.860634 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:23:00.877884 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:23:01.038606 systemd[1854]: Queued start job for default target default.target. 
Jan 30 13:23:01.048339 systemd[1854]: Created slice app.slice - User Application Slice. Jan 30 13:23:01.048470 systemd[1854]: Reached target paths.target - Paths. Jan 30 13:23:01.048486 systemd[1854]: Reached target timers.target - Timers. Jan 30 13:23:01.051321 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:23:01.063403 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:23:01.063524 systemd[1854]: Reached target sockets.target - Sockets. Jan 30 13:23:01.063538 systemd[1854]: Reached target basic.target - Basic System. Jan 30 13:23:01.063860 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:23:01.064492 systemd[1854]: Reached target default.target - Main User Target. Jan 30 13:23:01.064544 systemd[1854]: Startup finished in 174ms. Jan 30 13:23:01.070376 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:23:01.072072 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:23:01.161882 kubelet[1837]: E0130 13:23:01.161740 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:01.164468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:01.164607 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:23:01.999060 waagent[1836]: 2025-01-30T13:23:01.998956Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:23:02.007492 waagent[1836]: 2025-01-30T13:23:02.007394Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 30 13:23:02.012467 waagent[1836]: 2025-01-30T13:23:02.012383Z INFO Daemon Daemon Python: 3.11.10 Jan 30 13:23:02.017507 waagent[1836]: 2025-01-30T13:23:02.017425Z INFO Daemon Daemon Run daemon Jan 30 13:23:02.021975 waagent[1836]: 2025-01-30T13:23:02.021910Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 30 13:23:02.031466 waagent[1836]: 2025-01-30T13:23:02.031376Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:23:02.037222 waagent[1836]: 2025-01-30T13:23:02.037157Z INFO Daemon Daemon Activate resource disk Jan 30 13:23:02.042126 waagent[1836]: 2025-01-30T13:23:02.042034Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:23:02.055995 waagent[1836]: 2025-01-30T13:23:02.055909Z INFO Daemon Daemon Found device: None Jan 30 13:23:02.060872 waagent[1836]: 2025-01-30T13:23:02.060795Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:23:02.069790 waagent[1836]: 2025-01-30T13:23:02.069711Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 13:23:02.081196 waagent[1836]: 2025-01-30T13:23:02.081137Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:23:02.087014 waagent[1836]: 2025-01-30T13:23:02.086941Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:23:02.099089 waagent[1836]: 2025-01-30T13:23:02.099007Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 30 13:23:02.119140 waagent[1836]: 2025-01-30T13:23:02.113436Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:23:02.122850 waagent[1836]: 2025-01-30T13:23:02.122781Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:23:02.127822 waagent[1836]: 2025-01-30T13:23:02.127761Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:23:02.208143 waagent[1836]: 2025-01-30T13:23:02.206656Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:23:02.247884 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 13:23:02.250033 waagent[1836]: 2025-01-30T13:23:02.249893Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:23:02.255662 waagent[1836]: 2025-01-30T13:23:02.255581Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:23:02.262069 waagent[1836]: 2025-01-30T13:23:02.261999Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 30 13:23:02.269133 waagent[1836]: 2025-01-30T13:23:02.268826Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:23:02.274267 waagent[1836]: 2025-01-30T13:23:02.274204Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:23:02.280160 waagent[1836]: 2025-01-30T13:23:02.280075Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:23:02.348997 waagent[1836]: 2025-01-30T13:23:02.348951Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:23:02.355524 waagent[1836]: 2025-01-30T13:23:02.355492Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:23:02.360835 waagent[1836]: 2025-01-30T13:23:02.360778Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:23:02.834141 waagent[1836]: 2025-01-30T13:23:02.834021Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:23:02.841713 waagent[1836]: 2025-01-30T13:23:02.841636Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 30 13:23:02.852418 waagent[1836]: 2025-01-30T13:23:02.852365Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:23:02.873068 waagent[1836]: 2025-01-30T13:23:02.873024Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 13:23:02.879498 waagent[1836]: 2025-01-30T13:23:02.879449Z INFO Daemon Jan 30 13:23:02.882931 waagent[1836]: 2025-01-30T13:23:02.882874Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 67d399f6-2326-4bac-bfbc-4b92b7772336 eTag: 3697853511274103675 source: Fabric] Jan 30 13:23:02.894944 waagent[1836]: 2025-01-30T13:23:02.894895Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 13:23:02.902831 waagent[1836]: 2025-01-30T13:23:02.902783Z INFO Daemon Jan 30 13:23:02.906037 waagent[1836]: 2025-01-30T13:23:02.905984Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:23:02.917599 waagent[1836]: 2025-01-30T13:23:02.917557Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:23:03.003149 waagent[1836]: 2025-01-30T13:23:03.002177Z INFO Daemon Downloaded certificate {'thumbprint': 'F3B1945C6DEB95EFA3F87AF728C1A410F3714569', 'hasPrivateKey': True} Jan 30 13:23:03.012225 waagent[1836]: 2025-01-30T13:23:03.012175Z INFO Daemon Downloaded certificate {'thumbprint': '48481C1ECA1A5790374CAAE65215FFC568151080', 'hasPrivateKey': False} Jan 30 13:23:03.021769 waagent[1836]: 2025-01-30T13:23:03.021718Z INFO Daemon Fetch goal state completed Jan 30 13:23:03.039780 waagent[1836]: 2025-01-30T13:23:03.039720Z INFO Daemon Daemon Starting provisioning Jan 30 13:23:03.044975 waagent[1836]: 2025-01-30T13:23:03.044903Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 30 13:23:03.050041 waagent[1836]: 2025-01-30T13:23:03.049980Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-4db8cd7df2] Jan 30 13:23:03.061986 waagent[1836]: 2025-01-30T13:23:03.061904Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-4db8cd7df2] Jan 30 13:23:03.069151 waagent[1836]: 2025-01-30T13:23:03.069067Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:23:03.076057 waagent[1836]: 2025-01-30T13:23:03.075996Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:23:03.099833 systemd-networkd[1535]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:23:03.100402 systemd-networkd[1535]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:23:03.100469 systemd-networkd[1535]: eth0: DHCP lease lost Jan 30 13:23:03.101305 waagent[1836]: 2025-01-30T13:23:03.101042Z INFO Daemon Daemon Create user account if not exists Jan 30 13:23:03.107461 waagent[1836]: 2025-01-30T13:23:03.107386Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:23:03.113409 waagent[1836]: 2025-01-30T13:23:03.113339Z INFO Daemon Daemon Configure sudoer Jan 30 13:23:03.114186 systemd-networkd[1535]: eth0: DHCPv6 lease lost Jan 30 13:23:03.118243 waagent[1836]: 2025-01-30T13:23:03.118164Z INFO Daemon Daemon Configure sshd Jan 30 13:23:03.123044 waagent[1836]: 2025-01-30T13:23:03.122975Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:23:03.135375 waagent[1836]: 2025-01-30T13:23:03.135304Z INFO Daemon Daemon Deploy ssh public key. 
Jan 30 13:23:03.151274 systemd-networkd[1535]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 13:23:04.222760 waagent[1836]: 2025-01-30T13:23:04.222710Z INFO Daemon Daemon Provisioning complete Jan 30 13:23:04.243572 waagent[1836]: 2025-01-30T13:23:04.243514Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:23:04.250382 waagent[1836]: 2025-01-30T13:23:04.250309Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 30 13:23:04.259770 waagent[1836]: 2025-01-30T13:23:04.259712Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:23:04.405864 waagent[1912]: 2025-01-30T13:23:04.405311Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:23:04.405864 waagent[1912]: 2025-01-30T13:23:04.405467Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 30 13:23:04.405864 waagent[1912]: 2025-01-30T13:23:04.405519Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 30 13:23:04.700141 waagent[1912]: 2025-01-30T13:23:04.700002Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:23:04.700335 waagent[1912]: 2025-01-30T13:23:04.700293Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:23:04.700400 waagent[1912]: 2025-01-30T13:23:04.700371Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:23:04.709946 waagent[1912]: 2025-01-30T13:23:04.709855Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:23:04.720098 waagent[1912]: 2025-01-30T13:23:04.720053Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 13:23:04.720670 waagent[1912]: 2025-01-30T13:23:04.720625Z INFO ExtHandler Jan 30 13:23:04.720740 waagent[1912]: 
2025-01-30T13:23:04.720711Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 79cb2f45-5fc3-41c6-bafe-096b1fe7dd97 eTag: 3697853511274103675 source: Fabric] Jan 30 13:23:04.721049 waagent[1912]: 2025-01-30T13:23:04.721011Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 30 13:23:04.749279 waagent[1912]: 2025-01-30T13:23:04.749169Z INFO ExtHandler Jan 30 13:23:04.749376 waagent[1912]: 2025-01-30T13:23:04.749343Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:23:04.754084 waagent[1912]: 2025-01-30T13:23:04.754045Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:23:04.838374 waagent[1912]: 2025-01-30T13:23:04.838268Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F3B1945C6DEB95EFA3F87AF728C1A410F3714569', 'hasPrivateKey': True} Jan 30 13:23:04.838843 waagent[1912]: 2025-01-30T13:23:04.838796Z INFO ExtHandler Downloaded certificate {'thumbprint': '48481C1ECA1A5790374CAAE65215FFC568151080', 'hasPrivateKey': False} Jan 30 13:23:04.839987 waagent[1912]: 2025-01-30T13:23:04.839318Z INFO ExtHandler Fetch goal state completed Jan 30 13:23:04.857034 waagent[1912]: 2025-01-30T13:23:04.856955Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1912 Jan 30 13:23:04.857239 waagent[1912]: 2025-01-30T13:23:04.857198Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:23:04.858969 waagent[1912]: 2025-01-30T13:23:04.858915Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:23:04.859415 waagent[1912]: 2025-01-30T13:23:04.859371Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:23:04.868717 waagent[1912]: 2025-01-30T13:23:04.868670Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 
13:23:04.868940 waagent[1912]: 2025-01-30T13:23:04.868899Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:23:04.876668 waagent[1912]: 2025-01-30T13:23:04.876610Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 13:23:04.884780 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit waagent.service)... Jan 30 13:23:04.884798 systemd[1]: Reloading... Jan 30 13:23:04.982155 zram_generator::config[1982]: No configuration found. Jan 30 13:23:05.061771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:23:05.142031 systemd[1]: Reloading finished in 256 ms. Jan 30 13:23:05.164595 waagent[1912]: 2025-01-30T13:23:05.164424Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:23:05.172177 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit waagent.service)... Jan 30 13:23:05.172194 systemd[1]: Reloading... Jan 30 13:23:05.258155 zram_generator::config[2056]: No configuration found. Jan 30 13:23:05.355084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:23:05.439718 systemd[1]: Reloading finished in 267 ms. 
Jan 30 13:23:05.462148 waagent[1912]: 2025-01-30T13:23:05.461404Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:23:05.462148 waagent[1912]: 2025-01-30T13:23:05.461601Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:23:05.558842 waagent[1912]: 2025-01-30T13:23:05.558766Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 13:23:05.560147 waagent[1912]: 2025-01-30T13:23:05.559536Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:23:05.560430 waagent[1912]: 2025-01-30T13:23:05.560371Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 13:23:05.560571 waagent[1912]: 2025-01-30T13:23:05.560515Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:23:05.560672 waagent[1912]: 2025-01-30T13:23:05.560636Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:23:05.561254 waagent[1912]: 2025-01-30T13:23:05.561202Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:23:05.561504 waagent[1912]: 2025-01-30T13:23:05.561414Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 30 13:23:05.562045 waagent[1912]: 2025-01-30T13:23:05.561987Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:23:05.562045 waagent[1912]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:23:05.562045 waagent[1912]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:23:05.562045 waagent[1912]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:23:05.562045 waagent[1912]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:23:05.562045 waagent[1912]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:23:05.562045 waagent[1912]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:23:05.562461 waagent[1912]: 2025-01-30T13:23:05.562405Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:23:05.562586 waagent[1912]: 2025-01-30T13:23:05.562503Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:23:05.562710 waagent[1912]: 2025-01-30T13:23:05.562659Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:23:05.563308 waagent[1912]: 2025-01-30T13:23:05.563157Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:23:05.563308 waagent[1912]: 2025-01-30T13:23:05.563256Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:23:05.563446 waagent[1912]: 2025-01-30T13:23:05.563405Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 30 13:23:05.563700 waagent[1912]: 2025-01-30T13:23:05.563641Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:23:05.564077 waagent[1912]: 2025-01-30T13:23:05.563891Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:23:05.564524 waagent[1912]: 2025-01-30T13:23:05.564466Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:23:05.565740 waagent[1912]: 2025-01-30T13:23:05.565636Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:23:05.575635 waagent[1912]: 2025-01-30T13:23:05.575572Z INFO ExtHandler ExtHandler Jan 30 13:23:05.576252 waagent[1912]: 2025-01-30T13:23:05.575706Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2c2816f2-26fa-4729-9323-c685a203198b correlation 02e54bf9-6d5c-4996-b384-c9e340a65fe5 created: 2025-01-30T13:21:26.527382Z] Jan 30 13:23:05.576252 waagent[1912]: 2025-01-30T13:23:05.576156Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 30 13:23:05.576815 waagent[1912]: 2025-01-30T13:23:05.576769Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 30 13:23:05.584901 waagent[1912]: 2025-01-30T13:23:05.584846Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:23:05.584901 waagent[1912]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:23:05.584901 waagent[1912]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:23:05.584901 waagent[1912]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:09:37 brd ff:ff:ff:ff:ff:ff Jan 30 13:23:05.584901 waagent[1912]: 3: enP36038s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:09:37 brd ff:ff:ff:ff:ff:ff\ altname enP36038p0s2 Jan 30 13:23:05.584901 waagent[1912]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:23:05.584901 waagent[1912]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:23:05.584901 waagent[1912]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:23:05.584901 waagent[1912]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:23:05.584901 waagent[1912]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:23:05.584901 waagent[1912]: 2: eth0 inet6 fe80::222:48ff:fe7b:937/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:23:05.584901 waagent[1912]: 3: enP36038s1 inet6 fe80::222:48ff:fe7b:937/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:23:05.616992 waagent[1912]: 2025-01-30T13:23:05.616784Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
271E6875-AC64-4761-BCF9-6D848D1E623C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:23:05.627236 waagent[1912]: 2025-01-30T13:23:05.627141Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 13:23:05.627236 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.627236 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.627236 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.627236 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.627236 waagent[1912]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.627236 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.627236 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:23:05.627236 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:23:05.627236 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:23:05.630724 waagent[1912]: 2025-01-30T13:23:05.630635Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:23:05.630724 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.630724 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.630724 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.630724 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.630724 waagent[1912]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:23:05.630724 waagent[1912]: pkts bytes target prot opt in out source destination Jan 30 13:23:05.630724 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:23:05.630724 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:23:05.630724 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Jan 30 13:23:05.630996 waagent[1912]: 2025-01-30T13:23:05.630958Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:23:11.279789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:23:11.288305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:11.378376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:11.392470 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:11.471739 kubelet[2145]: E0130 13:23:11.471665 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:11.474412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:11.474537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:21.529991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:23:21.538401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:21.629803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:23:21.640437 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:21.708403 kubelet[2160]: E0130 13:23:21.708296 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:21.710545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:21.710688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:23.013033 chronyd[1669]: Selected source PHC0 Jan 30 13:23:31.779970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:23:31.790386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:31.895890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:31.900343 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:31.964220 kubelet[2175]: E0130 13:23:31.964099 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:31.966656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:31.966795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:33.426104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 13:23:33.428361 systemd[1]: Started sshd@0-10.200.20.42:22-10.200.16.10:47412.service - OpenSSH per-connection server daemon (10.200.16.10:47412). Jan 30 13:23:33.906203 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 47412 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:23:33.907481 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:23:33.911321 systemd-logind[1699]: New session 3 of user core. Jan 30 13:23:33.924452 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:23:34.313376 systemd[1]: Started sshd@1-10.200.20.42:22-10.200.16.10:47418.service - OpenSSH per-connection server daemon (10.200.16.10:47418). Jan 30 13:23:34.770800 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 47418 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:23:34.772220 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:23:34.776507 systemd-logind[1699]: New session 4 of user core. Jan 30 13:23:34.787320 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:23:35.090456 sshd[2190]: Connection closed by 10.200.16.10 port 47418 Jan 30 13:23:35.091041 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Jan 30 13:23:35.094210 systemd[1]: sshd@1-10.200.20.42:22-10.200.16.10:47418.service: Deactivated successfully. Jan 30 13:23:35.095695 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:23:35.096334 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:23:35.097291 systemd-logind[1699]: Removed session 4. Jan 30 13:23:35.170688 systemd[1]: Started sshd@2-10.200.20.42:22-10.200.16.10:47420.service - OpenSSH per-connection server daemon (10.200.16.10:47420). 
Jan 30 13:23:35.616896 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 47420 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:23:35.618218 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:23:35.621896 systemd-logind[1699]: New session 5 of user core.
Jan 30 13:23:35.631339 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:23:35.933839 sshd[2197]: Connection closed by 10.200.16.10 port 47420
Jan 30 13:23:35.933694 sshd-session[2195]: pam_unix(sshd:session): session closed for user core
Jan 30 13:23:35.937477 systemd[1]: sshd@2-10.200.20.42:22-10.200.16.10:47420.service: Deactivated successfully.
Jan 30 13:23:35.939613 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:23:35.940517 systemd-logind[1699]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:23:35.941318 systemd-logind[1699]: Removed session 5.
Jan 30 13:23:36.017232 systemd[1]: Started sshd@3-10.200.20.42:22-10.200.16.10:38192.service - OpenSSH per-connection server daemon (10.200.16.10:38192).
Jan 30 13:23:36.463324 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 38192 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:23:36.464679 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:23:36.468513 systemd-logind[1699]: New session 6 of user core.
Jan 30 13:23:36.479288 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:23:36.784157 sshd[2204]: Connection closed by 10.200.16.10 port 38192
Jan 30 13:23:36.784020 sshd-session[2202]: pam_unix(sshd:session): session closed for user core
Jan 30 13:23:36.787339 systemd[1]: sshd@3-10.200.20.42:22-10.200.16.10:38192.service: Deactivated successfully.
Jan 30 13:23:36.788944 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:23:36.790481 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:23:36.791542 systemd-logind[1699]: Removed session 6.
Jan 30 13:23:36.861591 systemd[1]: Started sshd@4-10.200.20.42:22-10.200.16.10:38196.service - OpenSSH per-connection server daemon (10.200.16.10:38196).
Jan 30 13:23:37.295604 sshd[2209]: Accepted publickey for core from 10.200.16.10 port 38196 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:23:37.296881 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:23:37.300517 systemd-logind[1699]: New session 7 of user core.
Jan 30 13:23:37.315278 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:23:37.570565 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:23:37.570839 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:23:37.586457 sudo[2212]: pam_unix(sudo:session): session closed for user root
Jan 30 13:23:37.660892 sshd[2211]: Connection closed by 10.200.16.10 port 38196
Jan 30 13:23:37.660748 sshd-session[2209]: pam_unix(sshd:session): session closed for user core
Jan 30 13:23:37.664378 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:23:37.665090 systemd[1]: sshd@4-10.200.20.42:22-10.200.16.10:38196.service: Deactivated successfully.
Jan 30 13:23:37.666818 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:23:37.667697 systemd-logind[1699]: Removed session 7.
Jan 30 13:23:37.737858 systemd[1]: Started sshd@5-10.200.20.42:22-10.200.16.10:38210.service - OpenSSH per-connection server daemon (10.200.16.10:38210).
Jan 30 13:23:38.167319 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 38210 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:23:38.168646 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:23:38.172324 systemd-logind[1699]: New session 8 of user core.
Jan 30 13:23:38.180257 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:23:38.410805 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:23:38.411393 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:23:38.414206 sudo[2221]: pam_unix(sudo:session): session closed for user root
Jan 30 13:23:38.418784 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 13:23:38.419385 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:23:38.430656 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:23:38.453775 augenrules[2243]: No rules
Jan 30 13:23:38.454893 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:23:38.456176 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:23:38.457491 sudo[2220]: pam_unix(sudo:session): session closed for user root
Jan 30 13:23:38.525140 sshd[2219]: Connection closed by 10.200.16.10 port 38210
Jan 30 13:23:38.525620 sshd-session[2217]: pam_unix(sshd:session): session closed for user core
Jan 30 13:23:38.529238 systemd[1]: sshd@5-10.200.20.42:22-10.200.16.10:38210.service: Deactivated successfully.
Jan 30 13:23:38.530936 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:23:38.532776 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:23:38.533546 systemd-logind[1699]: Removed session 8.
Jan 30 13:23:38.602041 systemd[1]: Started sshd@6-10.200.20.42:22-10.200.16.10:38222.service - OpenSSH per-connection server daemon (10.200.16.10:38222).
Jan 30 13:23:39.027795 sshd[2251]: Accepted publickey for core from 10.200.16.10 port 38222 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:23:39.029088 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:23:39.033729 systemd-logind[1699]: New session 9 of user core.
Jan 30 13:23:39.035359 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:23:39.270446 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:23:39.270705 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:23:39.689347 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 13:23:39.690013 (dockerd)[2272]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 13:23:40.025988 dockerd[2272]: time="2025-01-30T13:23:40.025275197Z" level=info msg="Starting up"
Jan 30 13:23:40.286434 dockerd[2272]: time="2025-01-30T13:23:40.286397157Z" level=info msg="Loading containers: start."
Jan 30 13:23:40.436178 kernel: Initializing XFRM netlink socket
Jan 30 13:23:40.488338 systemd-networkd[1535]: docker0: Link UP
Jan 30 13:23:40.529809 dockerd[2272]: time="2025-01-30T13:23:40.529198655Z" level=info msg="Loading containers: done."
Jan 30 13:23:40.540232 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3619311307-merged.mount: Deactivated successfully.
Jan 30 13:23:40.556884 dockerd[2272]: time="2025-01-30T13:23:40.556835929Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 13:23:40.557047 dockerd[2272]: time="2025-01-30T13:23:40.556949529Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 13:23:40.557117 dockerd[2272]: time="2025-01-30T13:23:40.557079409Z" level=info msg="Daemon has completed initialization"
Jan 30 13:23:40.611152 dockerd[2272]: time="2025-01-30T13:23:40.611075115Z" level=info msg="API listen on /run/docker.sock"
Jan 30 13:23:40.611202 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 13:23:41.534805 containerd[1738]: time="2025-01-30T13:23:41.534761449Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 30 13:23:41.983315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 13:23:41.989301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:23:42.087330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:23:42.091576 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:23:42.128702 kubelet[2465]: E0130 13:23:42.128646 2465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:23:42.130536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:23:42.130657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:23:44.632237 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 30 13:23:45.037702 update_engine[1703]: I20250130 13:23:45.037626 1703 update_attempter.cc:509] Updating boot flags...
Jan 30 13:23:45.446217 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2487)
Jan 30 13:23:45.576252 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2490)
Jan 30 13:23:45.656275 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2490)
Jan 30 13:23:47.716200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621112995.mount: Deactivated successfully.
Jan 30 13:23:52.279931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 30 13:23:52.288375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:23:52.395707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:23:52.400049 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:23:52.433942 kubelet[2654]: E0130 13:23:52.433891 2654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:23:52.436459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:23:52.436697 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:23:59.192159 containerd[1738]: time="2025-01-30T13:23:59.191414135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:59.193665 containerd[1738]: time="2025-01-30T13:23:59.193597778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070"
Jan 30 13:23:59.197809 containerd[1738]: time="2025-01-30T13:23:59.197734184Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:59.205750 containerd[1738]: time="2025-01-30T13:23:59.205681915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:59.206916 containerd[1738]: time="2025-01-30T13:23:59.206722117Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 17.671915148s"
Jan 30 13:23:59.206916 containerd[1738]: time="2025-01-30T13:23:59.206758197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 30 13:23:59.207587 containerd[1738]: time="2025-01-30T13:23:59.207448878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 30 13:24:00.790822 containerd[1738]: time="2025-01-30T13:24:00.790761187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:00.799889 containerd[1738]: time="2025-01-30T13:24:00.799644599Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467"
Jan 30 13:24:00.802481 containerd[1738]: time="2025-01-30T13:24:00.802426764Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:00.809950 containerd[1738]: time="2025-01-30T13:24:00.809884174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:00.811023 containerd[1738]: time="2025-01-30T13:24:00.810878696Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.603399378s"
Jan 30 13:24:00.811023 containerd[1738]: time="2025-01-30T13:24:00.810918696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 30 13:24:00.811933 containerd[1738]: time="2025-01-30T13:24:00.811608297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 30 13:24:02.133147 containerd[1738]: time="2025-01-30T13:24:02.132348863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:02.134885 containerd[1738]: time="2025-01-30T13:24:02.134838506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217"
Jan 30 13:24:02.140730 containerd[1738]: time="2025-01-30T13:24:02.140676315Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:02.146061 containerd[1738]: time="2025-01-30T13:24:02.146015963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:02.147259 containerd[1738]: time="2025-01-30T13:24:02.147104524Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.335465747s"
Jan 30 13:24:02.147259 containerd[1738]: time="2025-01-30T13:24:02.147155924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 30 13:24:02.148467 containerd[1738]: time="2025-01-30T13:24:02.148300326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 30 13:24:02.529855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 30 13:24:02.538329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:24:02.635669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:24:02.639952 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:24:02.673796 kubelet[2721]: E0130 13:24:02.673711 2721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:24:02.675894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:24:02.676037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:24:03.621177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271851181.mount: Deactivated successfully.
Jan 30 13:24:04.036098 containerd[1738]: time="2025-01-30T13:24:04.036034009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:04.038198 containerd[1738]: time="2025-01-30T13:24:04.038158651Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117"
Jan 30 13:24:04.040791 containerd[1738]: time="2025-01-30T13:24:04.040746215Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:04.045973 containerd[1738]: time="2025-01-30T13:24:04.045928661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:04.046949 containerd[1738]: time="2025-01-30T13:24:04.046789262Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.898460336s"
Jan 30 13:24:04.046949 containerd[1738]: time="2025-01-30T13:24:04.046818622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 30 13:24:04.047617 containerd[1738]: time="2025-01-30T13:24:04.047541103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 13:24:04.592527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062788584.mount: Deactivated successfully.
Jan 30 13:24:05.812069 containerd[1738]: time="2025-01-30T13:24:05.812022889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:05.816028 containerd[1738]: time="2025-01-30T13:24:05.815972494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 30 13:24:05.820926 containerd[1738]: time="2025-01-30T13:24:05.820788060Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:06.354020 containerd[1738]: time="2025-01-30T13:24:06.353959812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:06.354817 containerd[1738]: time="2025-01-30T13:24:06.354622453Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.30701875s"
Jan 30 13:24:06.354817 containerd[1738]: time="2025-01-30T13:24:06.354657773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 30 13:24:06.355953 containerd[1738]: time="2025-01-30T13:24:06.355856375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 30 13:24:07.056931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659191905.mount: Deactivated successfully.
Jan 30 13:24:07.091821 containerd[1738]: time="2025-01-30T13:24:07.091099622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:07.093825 containerd[1738]: time="2025-01-30T13:24:07.093775426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 30 13:24:07.096951 containerd[1738]: time="2025-01-30T13:24:07.096903270Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:07.102320 containerd[1738]: time="2025-01-30T13:24:07.102281596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:07.103442 containerd[1738]: time="2025-01-30T13:24:07.102939877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 747.049902ms"
Jan 30 13:24:07.103442 containerd[1738]: time="2025-01-30T13:24:07.102972677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 30 13:24:07.103666 containerd[1738]: time="2025-01-30T13:24:07.103637438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 30 13:24:07.797253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272443886.mount: Deactivated successfully.
Jan 30 13:24:11.415065 containerd[1738]: time="2025-01-30T13:24:11.414266642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:11.416452 containerd[1738]: time="2025-01-30T13:24:11.416255728Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Jan 30 13:24:11.420856 containerd[1738]: time="2025-01-30T13:24:11.418940256Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:11.424779 containerd[1738]: time="2025-01-30T13:24:11.424263352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:24:11.425538 containerd[1738]: time="2025-01-30T13:24:11.425502876Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.321832278s"
Jan 30 13:24:11.425538 containerd[1738]: time="2025-01-30T13:24:11.425535036Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 30 13:24:12.779802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 30 13:24:12.786204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:24:12.934275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:24:12.943355 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:24:12.981116 kubelet[2865]: E0130 13:24:12.979672 2865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:24:12.982067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:24:12.982217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:24:16.788284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:24:16.795378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:24:16.827854 systemd[1]: Reloading requested from client PID 2880 ('systemctl') (unit session-9.scope)...
Jan 30 13:24:16.828005 systemd[1]: Reloading...
Jan 30 13:24:16.935177 zram_generator::config[2920]: No configuration found.
Jan 30 13:24:17.042128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:24:17.122053 systemd[1]: Reloading finished in 293 ms.
Jan 30 13:24:17.170170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:24:17.170259 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:24:17.170642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:24:17.172986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:24:17.278278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:24:17.283301 (kubelet)[2987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:24:17.323232 kubelet[2987]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:24:17.323232 kubelet[2987]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:24:17.323232 kubelet[2987]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:24:17.323232 kubelet[2987]: I0130 13:24:17.322403 2987 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:24:18.067061 kubelet[2987]: I0130 13:24:18.066980 2987 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:24:18.067061 kubelet[2987]: I0130 13:24:18.067052 2987 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:24:18.067405 kubelet[2987]: I0130 13:24:18.067359 2987 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:24:18.092398 kubelet[2987]: E0130 13:24:18.092353 2987 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:24:18.093774 kubelet[2987]: I0130 13:24:18.093612 2987 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:24:18.104286 kubelet[2987]: E0130 13:24:18.104060 2987 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:24:18.104286 kubelet[2987]: I0130 13:24:18.104093 2987 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:24:18.108553 kubelet[2987]: I0130 13:24:18.108336 2987 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:24:18.109129 kubelet[2987]: I0130 13:24:18.109099 2987 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:24:18.109407 kubelet[2987]: I0130 13:24:18.109354 2987 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:24:18.109975 kubelet[2987]: I0130 13:24:18.109460 2987 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-4db8cd7df2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:24:18.109975 kubelet[2987]: I0130 13:24:18.109671 2987 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:24:18.109975 kubelet[2987]: I0130 13:24:18.109681 2987 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:24:18.109975 kubelet[2987]: I0130 13:24:18.109818 2987 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:24:18.111555 kubelet[2987]: I0130 13:24:18.111530 2987 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:24:18.111645 kubelet[2987]: I0130 13:24:18.111632 2987 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:24:18.111747 kubelet[2987]: I0130 13:24:18.111739 2987 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:24:18.111834 kubelet[2987]: I0130 13:24:18.111826 2987 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:24:18.116609 kubelet[2987]: W0130 13:24:18.115324 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-4db8cd7df2&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused
Jan 30 13:24:18.116609 kubelet[2987]: E0130 13:24:18.115450 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-4db8cd7df2&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:24:18.116609 kubelet[2987]: W0130 13:24:18.115837 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused
Jan 30 13:24:18.116609 kubelet[2987]: E0130 13:24:18.115874 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:24:18.116609 kubelet[2987]: I0130 13:24:18.116478 2987 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:24:18.118975 kubelet[2987]: I0130 13:24:18.118948 2987 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:24:18.120379 kubelet[2987]: W0130 13:24:18.119768 2987 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:24:18.120837 kubelet[2987]: I0130 13:24:18.120777 2987 server.go:1269] "Started kubelet"
Jan 30 13:24:18.122279 kubelet[2987]: I0130 13:24:18.121885 2987 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:24:18.123212 kubelet[2987]: I0130 13:24:18.123195 2987 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:24:18.123925 kubelet[2987]: I0130 13:24:18.123823 2987 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:24:18.124503 kubelet[2987]: I0130 13:24:18.124484 2987 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:24:18.125287 kubelet[2987]: E0130 13:24:18.124246 2987 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-4db8cd7df2.181f7b372e6f2947 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-4db8cd7df2,UID:ci-4186.1.0-a-4db8cd7df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-4db8cd7df2,},FirstTimestamp:2025-01-30 13:24:18.120755527 +0000 UTC m=+0.834380978,LastTimestamp:2025-01-30 13:24:18.120755527 +0000 UTC
m=+0.834380978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-4db8cd7df2,}" Jan 30 13:24:18.126921 kubelet[2987]: I0130 13:24:18.126903 2987 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:24:18.127617 kubelet[2987]: I0130 13:24:18.127542 2987 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:24:18.130340 kubelet[2987]: E0130 13:24:18.130272 2987 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:24:18.131145 kubelet[2987]: I0130 13:24:18.130862 2987 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:24:18.131145 kubelet[2987]: I0130 13:24:18.131015 2987 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:24:18.131145 kubelet[2987]: I0130 13:24:18.131104 2987 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:24:18.132119 kubelet[2987]: W0130 13:24:18.132048 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:18.132226 kubelet[2987]: E0130 13:24:18.132126 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:18.132590 kubelet[2987]: I0130 13:24:18.132416 2987 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:24:18.132590 kubelet[2987]: 
I0130 13:24:18.132531 2987 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:24:18.134903 kubelet[2987]: E0130 13:24:18.134698 2987 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-4db8cd7df2\" not found" Jan 30 13:24:18.134903 kubelet[2987]: E0130 13:24:18.134825 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-4db8cd7df2?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="200ms" Jan 30 13:24:18.135043 kubelet[2987]: I0130 13:24:18.135001 2987 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:24:18.184914 kubelet[2987]: I0130 13:24:18.184830 2987 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:24:18.186588 kubelet[2987]: I0130 13:24:18.186539 2987 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:24:18.186588 kubelet[2987]: I0130 13:24:18.186579 2987 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:24:18.186742 kubelet[2987]: I0130 13:24:18.186600 2987 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:24:18.186742 kubelet[2987]: E0130 13:24:18.186675 2987 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:24:18.189803 kubelet[2987]: W0130 13:24:18.189638 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:18.189803 kubelet[2987]: E0130 13:24:18.189716 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:18.235845 kubelet[2987]: E0130 13:24:18.235771 2987 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-4db8cd7df2\" not found" Jan 30 13:24:18.278450 kubelet[2987]: I0130 13:24:18.278418 2987 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:24:18.278450 kubelet[2987]: I0130 13:24:18.278440 2987 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:24:18.278450 kubelet[2987]: I0130 13:24:18.278461 2987 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:24:18.287012 kubelet[2987]: E0130 13:24:18.286977 2987 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:24:18.288385 kubelet[2987]: I0130 13:24:18.288366 2987 
policy_none.go:49] "None policy: Start" Jan 30 13:24:18.289412 kubelet[2987]: I0130 13:24:18.289355 2987 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:24:18.289412 kubelet[2987]: I0130 13:24:18.289414 2987 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:24:18.301405 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:24:18.312798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:24:18.316501 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:24:18.328103 kubelet[2987]: I0130 13:24:18.327032 2987 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:24:18.328103 kubelet[2987]: I0130 13:24:18.327252 2987 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:24:18.328103 kubelet[2987]: I0130 13:24:18.327264 2987 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:24:18.328103 kubelet[2987]: I0130 13:24:18.327891 2987 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:24:18.329491 kubelet[2987]: E0130 13:24:18.328982 2987 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-4db8cd7df2\" not found" Jan 30 13:24:18.336173 kubelet[2987]: E0130 13:24:18.336136 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-4db8cd7df2?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="400ms" Jan 30 13:24:18.429733 kubelet[2987]: I0130 13:24:18.429689 2987 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.430152 
kubelet[2987]: E0130 13:24:18.430103 2987 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.496964 systemd[1]: Created slice kubepods-burstable-podc534504db9989cbb7ca8d5dd15637db1.slice - libcontainer container kubepods-burstable-podc534504db9989cbb7ca8d5dd15637db1.slice. Jan 30 13:24:18.508196 systemd[1]: Created slice kubepods-burstable-pod0a1bb7ec26ffe761dc18786603dbfbcd.slice - libcontainer container kubepods-burstable-pod0a1bb7ec26ffe761dc18786603dbfbcd.slice. Jan 30 13:24:18.520933 systemd[1]: Created slice kubepods-burstable-pod5192eb7cd254f46d6746d18119849980.slice - libcontainer container kubepods-burstable-pod5192eb7cd254f46d6746d18119849980.slice. Jan 30 13:24:18.532813 kubelet[2987]: I0130 13:24:18.532785 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a1bb7ec26ffe761dc18786603dbfbcd-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-4db8cd7df2\" (UID: \"0a1bb7ec26ffe761dc18786603dbfbcd\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.532972 kubelet[2987]: I0130 13:24:18.532957 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: \"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533041 kubelet[2987]: I0130 13:24:18.533025 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: \"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533188 kubelet[2987]: I0130 13:24:18.533176 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533390 kubelet[2987]: I0130 13:24:18.533275 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533390 kubelet[2987]: I0130 13:24:18.533294 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533390 kubelet[2987]: I0130 13:24:18.533311 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533390 kubelet[2987]: I0130 13:24:18.533335 2987 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.533390 kubelet[2987]: I0130 13:24:18.533351 2987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: \"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.632215 kubelet[2987]: I0130 13:24:18.632097 2987 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.632812 kubelet[2987]: E0130 13:24:18.632776 2987 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:18.737103 kubelet[2987]: E0130 13:24:18.737059 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-4db8cd7df2?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="800ms" Jan 30 13:24:18.807220 containerd[1738]: time="2025-01-30T13:24:18.806936349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-4db8cd7df2,Uid:c534504db9989cbb7ca8d5dd15637db1,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:18.811259 containerd[1738]: time="2025-01-30T13:24:18.811217155Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-4db8cd7df2,Uid:0a1bb7ec26ffe761dc18786603dbfbcd,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:18.824584 containerd[1738]: time="2025-01-30T13:24:18.824323612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-4db8cd7df2,Uid:5192eb7cd254f46d6746d18119849980,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:19.034496 kubelet[2987]: I0130 13:24:19.034464 2987 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:19.034869 kubelet[2987]: E0130 13:24:19.034825 2987 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:19.145777 kubelet[2987]: W0130 13:24:19.145721 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:19.145925 kubelet[2987]: E0130 13:24:19.145786 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:19.241997 kubelet[2987]: W0130 13:24:19.241903 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:19.241997 kubelet[2987]: E0130 13:24:19.241964 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:19.382183 kubelet[2987]: W0130 13:24:19.382006 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-4db8cd7df2&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:19.382183 kubelet[2987]: E0130 13:24:19.382069 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-4db8cd7df2&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:19.453641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255876103.mount: Deactivated successfully. 
Jan 30 13:24:19.483319 containerd[1738]: time="2025-01-30T13:24:19.483246871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:24:19.494690 containerd[1738]: time="2025-01-30T13:24:19.494643046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 13:24:19.499260 containerd[1738]: time="2025-01-30T13:24:19.499223012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:24:19.505137 containerd[1738]: time="2025-01-30T13:24:19.504622059Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:24:19.512133 containerd[1738]: time="2025-01-30T13:24:19.512060908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:24:19.517911 containerd[1738]: time="2025-01-30T13:24:19.517141915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:24:19.521756 containerd[1738]: time="2025-01-30T13:24:19.521710481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:24:19.522614 containerd[1738]: time="2025-01-30T13:24:19.522588042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 715.577253ms" Jan 30 13:24:19.525300 containerd[1738]: time="2025-01-30T13:24:19.525255166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:24:19.531665 containerd[1738]: time="2025-01-30T13:24:19.531475134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 707.068682ms" Jan 30 13:24:19.538297 kubelet[2987]: E0130 13:24:19.538248 2987 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-4db8cd7df2?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="1.6s" Jan 30 13:24:19.546813 kubelet[2987]: W0130 13:24:19.546779 2987 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 30 13:24:19.547001 kubelet[2987]: E0130 13:24:19.546970 2987 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:24:19.553870 containerd[1738]: time="2025-01-30T13:24:19.553760322Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 742.455487ms" Jan 30 13:24:19.763333 containerd[1738]: time="2025-01-30T13:24:19.763070354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:19.763333 containerd[1738]: time="2025-01-30T13:24:19.763161034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:19.763333 containerd[1738]: time="2025-01-30T13:24:19.763183074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.764997 containerd[1738]: time="2025-01-30T13:24:19.764939876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.766124 containerd[1738]: time="2025-01-30T13:24:19.765939878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:19.766760 containerd[1738]: time="2025-01-30T13:24:19.766004278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:19.766826 containerd[1738]: time="2025-01-30T13:24:19.766751799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.766880 containerd[1738]: time="2025-01-30T13:24:19.766857799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.768330 containerd[1738]: time="2025-01-30T13:24:19.768256721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:19.768330 containerd[1738]: time="2025-01-30T13:24:19.768306481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:19.768462 containerd[1738]: time="2025-01-30T13:24:19.768324201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.768462 containerd[1738]: time="2025-01-30T13:24:19.768383881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:19.792341 systemd[1]: Started cri-containerd-491ba1224a01cdf934c050a0e1caa0aeadc11d410a35101a550f86566396a1d1.scope - libcontainer container 491ba1224a01cdf934c050a0e1caa0aeadc11d410a35101a550f86566396a1d1. Jan 30 13:24:19.793411 systemd[1]: Started cri-containerd-d27ad96ccfb2b69db551d30bceef393da4e563a4326defebf0e655c05928eb5b.scope - libcontainer container d27ad96ccfb2b69db551d30bceef393da4e563a4326defebf0e655c05928eb5b. Jan 30 13:24:19.800701 systemd[1]: Started cri-containerd-e204e53bc527b895a3354d9ce95fe4b2bf5a8551e9daeaae92a92cbd296d74e2.scope - libcontainer container e204e53bc527b895a3354d9ce95fe4b2bf5a8551e9daeaae92a92cbd296d74e2. 
Jan 30 13:24:19.837856 kubelet[2987]: I0130 13:24:19.837774 2987 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:19.838565 kubelet[2987]: E0130 13:24:19.838154 2987 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:19.854886 containerd[1738]: time="2025-01-30T13:24:19.854478512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-4db8cd7df2,Uid:5192eb7cd254f46d6746d18119849980,Namespace:kube-system,Attempt:0,} returns sandbox id \"d27ad96ccfb2b69db551d30bceef393da4e563a4326defebf0e655c05928eb5b\"" Jan 30 13:24:19.860908 containerd[1738]: time="2025-01-30T13:24:19.860872361Z" level=info msg="CreateContainer within sandbox \"d27ad96ccfb2b69db551d30bceef393da4e563a4326defebf0e655c05928eb5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:24:19.861942 containerd[1738]: time="2025-01-30T13:24:19.861907202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-4db8cd7df2,Uid:c534504db9989cbb7ca8d5dd15637db1,Namespace:kube-system,Attempt:0,} returns sandbox id \"491ba1224a01cdf934c050a0e1caa0aeadc11d410a35101a550f86566396a1d1\"" Jan 30 13:24:19.866286 containerd[1738]: time="2025-01-30T13:24:19.866251128Z" level=info msg="CreateContainer within sandbox \"491ba1224a01cdf934c050a0e1caa0aeadc11d410a35101a550f86566396a1d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:24:19.870724 containerd[1738]: time="2025-01-30T13:24:19.870674213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-4db8cd7df2,Uid:0a1bb7ec26ffe761dc18786603dbfbcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e204e53bc527b895a3354d9ce95fe4b2bf5a8551e9daeaae92a92cbd296d74e2\"" Jan 30 
13:24:19.873731 containerd[1738]: time="2025-01-30T13:24:19.873695977Z" level=info msg="CreateContainer within sandbox \"e204e53bc527b895a3354d9ce95fe4b2bf5a8551e9daeaae92a92cbd296d74e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:24:19.926687 containerd[1738]: time="2025-01-30T13:24:19.926639846Z" level=info msg="CreateContainer within sandbox \"d27ad96ccfb2b69db551d30bceef393da4e563a4326defebf0e655c05928eb5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71054244907be2cdb1507b061b6964362b8a10b4ea63854fa4b0e6c6880263d8\"" Jan 30 13:24:19.927465 containerd[1738]: time="2025-01-30T13:24:19.927440807Z" level=info msg="StartContainer for \"71054244907be2cdb1507b061b6964362b8a10b4ea63854fa4b0e6c6880263d8\"" Jan 30 13:24:19.935188 containerd[1738]: time="2025-01-30T13:24:19.935141617Z" level=info msg="CreateContainer within sandbox \"e204e53bc527b895a3354d9ce95fe4b2bf5a8551e9daeaae92a92cbd296d74e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"430e080cf27dbb21437f0af6f4acc9d38913ad5e8c046ca04132975662188a99\"" Jan 30 13:24:19.936184 containerd[1738]: time="2025-01-30T13:24:19.936158938Z" level=info msg="StartContainer for \"430e080cf27dbb21437f0af6f4acc9d38913ad5e8c046ca04132975662188a99\"" Jan 30 13:24:19.942847 containerd[1738]: time="2025-01-30T13:24:19.942701067Z" level=info msg="CreateContainer within sandbox \"491ba1224a01cdf934c050a0e1caa0aeadc11d410a35101a550f86566396a1d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"846579a7ddac237d37989620c1452ad91f8d4219bbd1d295b0bdb463ec36ea40\"" Jan 30 13:24:19.943546 containerd[1738]: time="2025-01-30T13:24:19.943457028Z" level=info msg="StartContainer for \"846579a7ddac237d37989620c1452ad91f8d4219bbd1d295b0bdb463ec36ea40\"" Jan 30 13:24:19.958286 systemd[1]: Started cri-containerd-71054244907be2cdb1507b061b6964362b8a10b4ea63854fa4b0e6c6880263d8.scope - libcontainer container 
71054244907be2cdb1507b061b6964362b8a10b4ea63854fa4b0e6c6880263d8. Jan 30 13:24:19.982456 systemd[1]: Started cri-containerd-430e080cf27dbb21437f0af6f4acc9d38913ad5e8c046ca04132975662188a99.scope - libcontainer container 430e080cf27dbb21437f0af6f4acc9d38913ad5e8c046ca04132975662188a99. Jan 30 13:24:19.990301 systemd[1]: Started cri-containerd-846579a7ddac237d37989620c1452ad91f8d4219bbd1d295b0bdb463ec36ea40.scope - libcontainer container 846579a7ddac237d37989620c1452ad91f8d4219bbd1d295b0bdb463ec36ea40. Jan 30 13:24:20.043241 containerd[1738]: time="2025-01-30T13:24:20.043196237Z" level=info msg="StartContainer for \"71054244907be2cdb1507b061b6964362b8a10b4ea63854fa4b0e6c6880263d8\" returns successfully" Jan 30 13:24:20.078303 containerd[1738]: time="2025-01-30T13:24:20.076131960Z" level=info msg="StartContainer for \"430e080cf27dbb21437f0af6f4acc9d38913ad5e8c046ca04132975662188a99\" returns successfully" Jan 30 13:24:20.078303 containerd[1738]: time="2025-01-30T13:24:20.076264800Z" level=info msg="StartContainer for \"846579a7ddac237d37989620c1452ad91f8d4219bbd1d295b0bdb463ec36ea40\" returns successfully" Jan 30 13:24:21.442173 kubelet[2987]: I0130 13:24:21.440978 2987 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:22.091465 kubelet[2987]: E0130 13:24:22.091421 2987 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-4db8cd7df2\" not found" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:22.118123 kubelet[2987]: I0130 13:24:22.117692 2987 apiserver.go:52] "Watching apiserver" Jan 30 13:24:22.128374 kubelet[2987]: I0130 13:24:22.128160 2987 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:22.128374 kubelet[2987]: E0130 13:24:22.128191 2987 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186.1.0-a-4db8cd7df2\": node \"ci-4186.1.0-a-4db8cd7df2\" not found" 
Jan 30 13:24:22.131606 kubelet[2987]: I0130 13:24:22.131560 2987 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:24:22.175398 kubelet[2987]: E0130 13:24:22.175292 2987 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-a-4db8cd7df2.181f7b372e6f2947 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-4db8cd7df2,UID:ci-4186.1.0-a-4db8cd7df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-4db8cd7df2,},FirstTimestamp:2025-01-30 13:24:18.120755527 +0000 UTC m=+0.834380978,LastTimestamp:2025-01-30 13:24:18.120755527 +0000 UTC m=+0.834380978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-4db8cd7df2,}" Jan 30 13:24:22.249203 kubelet[2987]: E0130 13:24:22.248225 2987 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-a-4db8cd7df2.181f7b372f00201c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-4db8cd7df2,UID:ci-4186.1.0-a-4db8cd7df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-4db8cd7df2,},FirstTimestamp:2025-01-30 13:24:18.1302559 +0000 UTC m=+0.843881351,LastTimestamp:2025-01-30 13:24:18.1302559 +0000 UTC m=+0.843881351,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-4db8cd7df2,}" Jan 30 13:24:22.305777 kubelet[2987]: E0130 13:24:22.305530 2987 event.go:359] "Server rejected 
event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-a-4db8cd7df2.181f7b3737ca26a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-4db8cd7df2,UID:ci-4186.1.0-a-4db8cd7df2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186.1.0-a-4db8cd7df2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-4db8cd7df2,},FirstTimestamp:2025-01-30 13:24:18.277713573 +0000 UTC m=+0.991339024,LastTimestamp:2025-01-30 13:24:18.277713573 +0000 UTC m=+0.991339024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-4db8cd7df2,}" Jan 30 13:24:22.851264 kubelet[2987]: W0130 13:24:22.851145 2987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:24.239826 kubelet[2987]: W0130 13:24:24.239360 2987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:24.244222 systemd[1]: Reloading requested from client PID 3260 ('systemctl') (unit session-9.scope)... Jan 30 13:24:24.244534 systemd[1]: Reloading... Jan 30 13:24:24.330117 zram_generator::config[3300]: No configuration found. Jan 30 13:24:24.387201 kubelet[2987]: W0130 13:24:24.387001 2987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:24.446072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:24:24.538556 systemd[1]: Reloading finished in 293 ms. Jan 30 13:24:24.573318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:24.589440 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:24:24.589818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:24.589874 systemd[1]: kubelet.service: Consumed 1.218s CPU time, 117.1M memory peak, 0B memory swap peak. Jan 30 13:24:24.596451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:24.805402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:24.810899 (kubelet)[3363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:24:24.853516 kubelet[3363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:24:24.855073 kubelet[3363]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:24:24.855073 kubelet[3363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:24:24.855779 kubelet[3363]: I0130 13:24:24.855157 3363 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:24:24.863101 kubelet[3363]: I0130 13:24:24.863058 3363 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:24:24.863101 kubelet[3363]: I0130 13:24:24.863093 3363 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:24:24.863410 kubelet[3363]: I0130 13:24:24.863389 3363 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:24:24.865714 kubelet[3363]: I0130 13:24:24.865683 3363 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:24:24.870628 kubelet[3363]: I0130 13:24:24.870276 3363 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:24:24.877440 kubelet[3363]: E0130 13:24:24.877402 3363 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:24:24.877440 kubelet[3363]: I0130 13:24:24.877436 3363 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:24:24.880709 kubelet[3363]: I0130 13:24:24.880656 3363 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:24:24.880900 kubelet[3363]: I0130 13:24:24.880809 3363 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:24:24.880939 kubelet[3363]: I0130 13:24:24.880912 3363 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:24:24.881586 kubelet[3363]: I0130 13:24:24.880943 3363 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-4db8cd7df2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:24:24.881586 kubelet[3363]: I0130 13:24:24.881316 3363 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:24:24.881586 kubelet[3363]: I0130 13:24:24.881327 3363 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:24:24.881586 kubelet[3363]: I0130 13:24:24.881361 3363 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:24:24.881586 kubelet[3363]: I0130 13:24:24.881478 3363 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:24:24.881784 kubelet[3363]: I0130 13:24:24.881489 3363 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:24:24.881784 kubelet[3363]: I0130 13:24:24.881519 3363 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:24:24.881784 kubelet[3363]: I0130 13:24:24.881531 3363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:24:24.885470 kubelet[3363]: I0130 13:24:24.885439 3363 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:24:24.885986 kubelet[3363]: I0130 13:24:24.885956 3363 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:24:24.887143 kubelet[3363]: I0130 13:24:24.886454 3363 server.go:1269] "Started kubelet" Jan 30 13:24:24.896151 kubelet[3363]: I0130 13:24:24.895009 3363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:24:24.909064 kubelet[3363]: I0130 13:24:24.908626 3363 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:24:24.909899 kubelet[3363]: I0130 13:24:24.909583 3363 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:24:24.910689 kubelet[3363]: I0130 13:24:24.910478 3363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:24:24.910762 kubelet[3363]: I0130 13:24:24.910734 3363 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:24:24.910993 kubelet[3363]: I0130 13:24:24.910967 3363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:24:24.912573 kubelet[3363]: I0130 13:24:24.912099 3363 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:24:24.916926 kubelet[3363]: E0130 13:24:24.916893 3363 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-4db8cd7df2\" not found" Jan 30 13:24:24.920826 kubelet[3363]: I0130 13:24:24.920419 3363 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:24:24.920826 kubelet[3363]: I0130 13:24:24.920691 3363 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:24:24.923489 kubelet[3363]: I0130 13:24:24.923452 3363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:24:24.925214 kubelet[3363]: I0130 13:24:24.925092 3363 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:24:24.925214 kubelet[3363]: I0130 13:24:24.925217 3363 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:24:24.925307 kubelet[3363]: I0130 13:24:24.925236 3363 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:24:24.925307 kubelet[3363]: E0130 13:24:24.925277 3363 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:24:24.932130 kubelet[3363]: I0130 13:24:24.932066 3363 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:24:24.932266 kubelet[3363]: I0130 13:24:24.932196 3363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:24:24.934236 kubelet[3363]: E0130 13:24:24.933699 3363 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:24:24.936502 kubelet[3363]: I0130 13:24:24.936278 3363 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:24:24.983087 kubelet[3363]: I0130 13:24:24.983059 3363 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:24:24.983262 kubelet[3363]: I0130 13:24:24.983250 3363 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:24:24.983321 kubelet[3363]: I0130 13:24:24.983313 3363 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:24:24.983641 kubelet[3363]: I0130 13:24:24.983520 3363 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:24:24.983641 kubelet[3363]: I0130 13:24:24.983534 3363 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:24:24.983641 kubelet[3363]: I0130 13:24:24.983552 3363 policy_none.go:49] "None policy: Start" Jan 30 13:24:24.984267 kubelet[3363]: I0130 13:24:24.984244 3363 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:24:24.984267 kubelet[3363]: I0130 13:24:24.984270 3363 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:24:24.984477 kubelet[3363]: I0130 13:24:24.984458 3363 state_mem.go:75] "Updated machine memory state" Jan 30 13:24:24.988769 kubelet[3363]: I0130 13:24:24.988730 3363 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:24:24.988940 kubelet[3363]: I0130 13:24:24.988920 3363 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:24:24.988974 kubelet[3363]: I0130 13:24:24.988936 3363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:24:24.989565 kubelet[3363]: I0130 13:24:24.989533 3363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:24:25.040153 kubelet[3363]: W0130 13:24:25.040085 3363 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:25.040318 kubelet[3363]: E0130 13:24:25.040175 3363 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-4db8cd7df2\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:25.040458 kubelet[3363]: W0130 13:24:25.040403 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:25.040490 kubelet[3363]: E0130 13:24:25.040481 3363 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:25.040545 kubelet[3363]: W0130 13:24:25.040430 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:24:25.040575 kubelet[3363]: E0130 13:24:25.040554 3363 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.092174 3363 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.106578 3363 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.122416 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: 
\"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.122448 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.122469 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791524 kubelet[3363]: I0130 13:24:25.122487 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a1bb7ec26ffe761dc18786603dbfbcd-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-4db8cd7df2\" (UID: \"0a1bb7ec26ffe761dc18786603dbfbcd\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.122504 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: \"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.122532 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/5192eb7cd254f46d6746d18119849980-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" (UID: \"5192eb7cd254f46d6746d18119849980\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.122555 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.122570 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.122591 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c534504db9989cbb7ca8d5dd15637db1-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-4db8cd7df2\" (UID: \"c534504db9989cbb7ca8d5dd15637db1\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.791982 kubelet[3363]: I0130 13:24:25.884813 3363 apiserver.go:52] "Watching apiserver" Jan 30 13:24:27.792126 kubelet[3363]: I0130 13:24:25.921402 3363 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:24:27.792126 kubelet[3363]: W0130 13:24:25.984419 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Jan 30 13:24:27.792126 kubelet[3363]: E0130 13:24:25.984477 3363 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-4db8cd7df2\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.792126 kubelet[3363]: I0130 13:24:25.997945 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-4db8cd7df2" podStartSLOduration=1.9979253959999999 podStartE2EDuration="1.997925396s" podCreationTimestamp="2025-01-30 13:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:25.984337338 +0000 UTC m=+1.169267196" watchObservedRunningTime="2025-01-30 13:24:25.997925396 +0000 UTC m=+1.182855214" Jan 30 13:24:27.792126 kubelet[3363]: I0130 13:24:26.011368 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-4db8cd7df2" podStartSLOduration=2.011349853 podStartE2EDuration="2.011349853s" podCreationTimestamp="2025-01-30 13:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:25.999925359 +0000 UTC m=+1.184855177" watchObservedRunningTime="2025-01-30 13:24:26.011349853 +0000 UTC m=+1.196279671" Jan 30 13:24:27.792269 kubelet[3363]: I0130 13:24:26.011522 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-4db8cd7df2" podStartSLOduration=4.011516934 podStartE2EDuration="4.011516934s" podCreationTimestamp="2025-01-30 13:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:26.010906373 +0000 UTC m=+1.195836191" watchObservedRunningTime="2025-01-30 13:24:26.011516934 
+0000 UTC m=+1.196446752" Jan 30 13:24:27.793754 kubelet[3363]: I0130 13:24:27.792886 3363 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-4db8cd7df2" Jan 30 13:24:27.804892 sudo[3396]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:24:27.805670 sudo[3396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:24:28.246709 sudo[3396]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:30.602296 kubelet[3363]: I0130 13:24:30.602264 3363 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:24:31.302394 kubelet[3363]: I0130 13:24:30.602785 3363 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:24:31.302478 containerd[1738]: time="2025-01-30T13:24:30.602633377Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:24:31.551130 systemd[1]: Created slice kubepods-besteffort-podedaac799_a2e9_4c8c_9e15_4293cc1a37cb.slice - libcontainer container kubepods-besteffort-podedaac799_a2e9_4c8c_9e15_4293cc1a37cb.slice. 
Jan 30 13:24:31.558392 kubelet[3363]: I0130 13:24:31.557176 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxz6n\" (UniqueName: \"kubernetes.io/projected/edaac799-a2e9-4c8c-9e15-4293cc1a37cb-kube-api-access-gxz6n\") pod \"kube-proxy-s4p9d\" (UID: \"edaac799-a2e9-4c8c-9e15-4293cc1a37cb\") " pod="kube-system/kube-proxy-s4p9d" Jan 30 13:24:31.558392 kubelet[3363]: I0130 13:24:31.557307 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edaac799-a2e9-4c8c-9e15-4293cc1a37cb-kube-proxy\") pod \"kube-proxy-s4p9d\" (UID: \"edaac799-a2e9-4c8c-9e15-4293cc1a37cb\") " pod="kube-system/kube-proxy-s4p9d" Jan 30 13:24:31.558392 kubelet[3363]: I0130 13:24:31.557329 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edaac799-a2e9-4c8c-9e15-4293cc1a37cb-xtables-lock\") pod \"kube-proxy-s4p9d\" (UID: \"edaac799-a2e9-4c8c-9e15-4293cc1a37cb\") " pod="kube-system/kube-proxy-s4p9d" Jan 30 13:24:31.558392 kubelet[3363]: I0130 13:24:31.557346 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edaac799-a2e9-4c8c-9e15-4293cc1a37cb-lib-modules\") pod \"kube-proxy-s4p9d\" (UID: \"edaac799-a2e9-4c8c-9e15-4293cc1a37cb\") " pod="kube-system/kube-proxy-s4p9d" Jan 30 13:24:31.580059 systemd[1]: Created slice kubepods-burstable-pod3e54aae8_0a7a_4dfb_b646_da9358b0a0ce.slice - libcontainer container kubepods-burstable-pod3e54aae8_0a7a_4dfb_b646_da9358b0a0ce.slice. 
Jan 30 13:24:31.658519 kubelet[3363]: I0130 13:24:31.658464 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-config-path\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658519 kubelet[3363]: I0130 13:24:31.658522 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-bpf-maps\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658543 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-etc-cni-netd\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658590 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-lib-modules\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658606 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-clustermesh-secrets\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658626 3363 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-cgroup\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658640 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cni-path\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.658919 kubelet[3363]: I0130 13:24:31.658654 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-xtables-lock\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658668 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-run\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658682 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56fvv\" (UniqueName: \"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-kube-api-access-56fvv\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658707 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hostproc\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658721 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-net\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658737 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-kernel\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.659065 kubelet[3363]: I0130 13:24:31.658754 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hubble-tls\") pod \"cilium-6bxdc\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " pod="kube-system/cilium-6bxdc" Jan 30 13:24:31.733799 systemd[1]: Created slice kubepods-besteffort-poddf0a0c39_87c3_42ba_aec0_f4a71b21a71b.slice - libcontainer container kubepods-besteffort-poddf0a0c39_87c3_42ba_aec0_f4a71b21a71b.slice. 
Jan 30 13:24:31.759283 kubelet[3363]: I0130 13:24:31.759231 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-cilium-config-path\") pod \"cilium-operator-5d85765b45-7pqtp\" (UID: \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\") " pod="kube-system/cilium-operator-5d85765b45-7pqtp" Jan 30 13:24:31.759490 kubelet[3363]: I0130 13:24:31.759330 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhsx2\" (UniqueName: \"kubernetes.io/projected/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-kube-api-access-lhsx2\") pod \"cilium-operator-5d85765b45-7pqtp\" (UID: \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\") " pod="kube-system/cilium-operator-5d85765b45-7pqtp" Jan 30 13:24:31.811233 sudo[2254]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:31.863860 containerd[1738]: time="2025-01-30T13:24:31.863555075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4p9d,Uid:edaac799-a2e9-4c8c-9e15-4293cc1a37cb,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:31.884807 containerd[1738]: time="2025-01-30T13:24:31.884535660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6bxdc,Uid:3e54aae8-0a7a-4dfb-b646-da9358b0a0ce,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:31.889419 sshd[2253]: Connection closed by 10.200.16.10 port 38222 Jan 30 13:24:31.889956 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:31.892923 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:24:31.893199 systemd[1]: sshd@6-10.200.20.42:22-10.200.16.10:38222.service: Deactivated successfully. Jan 30 13:24:31.895461 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:24:31.896276 systemd[1]: session-9.scope: Consumed 6.933s CPU time, 156.4M memory peak, 0B memory swap peak. 
Jan 30 13:24:31.898266 systemd-logind[1699]: Removed session 9. Jan 30 13:24:32.039254 containerd[1738]: time="2025-01-30T13:24:32.039210689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7pqtp,Uid:df0a0c39-87c3-42ba-aec0-f4a71b21a71b,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:34.059272 containerd[1738]: time="2025-01-30T13:24:34.058898072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:34.059272 containerd[1738]: time="2025-01-30T13:24:34.058959312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:34.059272 containerd[1738]: time="2025-01-30T13:24:34.058981192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.059272 containerd[1738]: time="2025-01-30T13:24:34.059064232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.080332 systemd[1]: Started cri-containerd-c8927b4a115a45e459d1bd5af72c88eb027dfdf2536176b2c5926bf37ebe2e68.scope - libcontainer container c8927b4a115a45e459d1bd5af72c88eb027dfdf2536176b2c5926bf37ebe2e68. 
Jan 30 13:24:34.101070 containerd[1738]: time="2025-01-30T13:24:34.101030003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4p9d,Uid:edaac799-a2e9-4c8c-9e15-4293cc1a37cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8927b4a115a45e459d1bd5af72c88eb027dfdf2536176b2c5926bf37ebe2e68\"" Jan 30 13:24:34.105638 containerd[1738]: time="2025-01-30T13:24:34.105582209Z" level=info msg="CreateContainer within sandbox \"c8927b4a115a45e459d1bd5af72c88eb027dfdf2536176b2c5926bf37ebe2e68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:24:34.156031 containerd[1738]: time="2025-01-30T13:24:34.155771990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:34.156031 containerd[1738]: time="2025-01-30T13:24:34.155873510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:34.156031 containerd[1738]: time="2025-01-30T13:24:34.155890470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.156340 containerd[1738]: time="2025-01-30T13:24:34.156164511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.174343 systemd[1]: Started cri-containerd-23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63.scope - libcontainer container 23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63. 
Jan 30 13:24:34.197802 containerd[1738]: time="2025-01-30T13:24:34.197694801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6bxdc,Uid:3e54aae8-0a7a-4dfb-b646-da9358b0a0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\"" Jan 30 13:24:34.201566 containerd[1738]: time="2025-01-30T13:24:34.201361046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:24:34.367941 containerd[1738]: time="2025-01-30T13:24:34.367461208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:34.367941 containerd[1738]: time="2025-01-30T13:24:34.367515288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:34.367941 containerd[1738]: time="2025-01-30T13:24:34.367530568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.367941 containerd[1738]: time="2025-01-30T13:24:34.367603169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:34.386364 systemd[1]: Started cri-containerd-1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36.scope - libcontainer container 1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36. 
Jan 30 13:24:34.414548 containerd[1738]: time="2025-01-30T13:24:34.414411666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7pqtp,Uid:df0a0c39-87c3-42ba-aec0-f4a71b21a71b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\"" Jan 30 13:24:35.254282 containerd[1738]: time="2025-01-30T13:24:35.254189415Z" level=info msg="CreateContainer within sandbox \"c8927b4a115a45e459d1bd5af72c88eb027dfdf2536176b2c5926bf37ebe2e68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91cc128c8acc3fba1b01bd5de61a05eac4bc1e741d226295006da32b895ac5e1\"" Jan 30 13:24:35.255871 containerd[1738]: time="2025-01-30T13:24:35.255362496Z" level=info msg="StartContainer for \"91cc128c8acc3fba1b01bd5de61a05eac4bc1e741d226295006da32b895ac5e1\"" Jan 30 13:24:35.286291 systemd[1]: Started cri-containerd-91cc128c8acc3fba1b01bd5de61a05eac4bc1e741d226295006da32b895ac5e1.scope - libcontainer container 91cc128c8acc3fba1b01bd5de61a05eac4bc1e741d226295006da32b895ac5e1. Jan 30 13:24:35.314924 containerd[1738]: time="2025-01-30T13:24:35.314770633Z" level=info msg="StartContainer for \"91cc128c8acc3fba1b01bd5de61a05eac4bc1e741d226295006da32b895ac5e1\" returns successfully" Jan 30 13:24:42.119546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502041169.mount: Deactivated successfully. 
Jan 30 13:24:43.807182 containerd[1738]: time="2025-01-30T13:24:43.807103785Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:43.814248 containerd[1738]: time="2025-01-30T13:24:43.814175594Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:24:43.818930 containerd[1738]: time="2025-01-30T13:24:43.818858160Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:43.820461 containerd[1738]: time="2025-01-30T13:24:43.820424922Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.619020836s" Jan 30 13:24:43.820745 containerd[1738]: time="2025-01-30T13:24:43.820616282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:24:43.822385 containerd[1738]: time="2025-01-30T13:24:43.821785003Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:24:43.823794 containerd[1738]: time="2025-01-30T13:24:43.823763206Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:24:43.858833 containerd[1738]: time="2025-01-30T13:24:43.858789131Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\"" Jan 30 13:24:43.859701 containerd[1738]: time="2025-01-30T13:24:43.859589932Z" level=info msg="StartContainer for \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\"" Jan 30 13:24:43.894326 systemd[1]: Started cri-containerd-24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d.scope - libcontainer container 24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d. Jan 30 13:24:43.919628 containerd[1738]: time="2025-01-30T13:24:43.919577168Z" level=info msg="StartContainer for \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\" returns successfully" Jan 30 13:24:43.928887 systemd[1]: cri-containerd-24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d.scope: Deactivated successfully. Jan 30 13:24:43.951609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d-rootfs.mount: Deactivated successfully.
Jan 30 13:24:44.017838 kubelet[3363]: I0130 13:24:44.016709 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s4p9d" podStartSLOduration=13.016694093 podStartE2EDuration="13.016694093s" podCreationTimestamp="2025-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:35.991463962 +0000 UTC m=+11.176393780" watchObservedRunningTime="2025-01-30 13:24:44.016694093 +0000 UTC m=+19.201623911" Jan 30 13:24:45.614711 containerd[1738]: time="2025-01-30T13:24:45.614455095Z" level=info msg="shim disconnected" id=24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d namespace=k8s.io Jan 30 13:24:45.614711 containerd[1738]: time="2025-01-30T13:24:45.614510135Z" level=warning msg="cleaning up after shim disconnected" id=24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d namespace=k8s.io Jan 30 13:24:45.614711 containerd[1738]: time="2025-01-30T13:24:45.614518855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:24:46.006860 containerd[1738]: time="2025-01-30T13:24:46.005850755Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:24:46.044237 containerd[1738]: time="2025-01-30T13:24:46.043614403Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\"" Jan 30 13:24:46.044416 containerd[1738]: time="2025-01-30T13:24:46.044387004Z" level=info msg="StartContainer for \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\"" Jan 30 13:24:46.101327 systemd[1]: Started cri-containerd-e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff.scope - libcontainer container e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff.
Jan 30 13:24:46.132328 containerd[1738]: time="2025-01-30T13:24:46.132019716Z" level=info msg="StartContainer for \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\" returns successfully" Jan 30 13:24:46.142308 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:24:46.143298 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:24:46.143376 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:24:46.150572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:24:46.150759 systemd[1]: cri-containerd-e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff.scope: Deactivated successfully. Jan 30 13:24:46.172173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:24:46.179405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff-rootfs.mount: Deactivated successfully.
Jan 30 13:24:46.190742 containerd[1738]: time="2025-01-30T13:24:46.190525351Z" level=info msg="shim disconnected" id=e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff namespace=k8s.io Jan 30 13:24:46.190742 containerd[1738]: time="2025-01-30T13:24:46.190579391Z" level=warning msg="cleaning up after shim disconnected" id=e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff namespace=k8s.io Jan 30 13:24:46.190742 containerd[1738]: time="2025-01-30T13:24:46.190587311Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:24:46.201377 containerd[1738]: time="2025-01-30T13:24:46.201321605Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:24:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:24:47.010285 containerd[1738]: time="2025-01-30T13:24:47.010120958Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:24:47.032065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901701586.mount: Deactivated successfully. Jan 30 13:24:47.067723 containerd[1738]: time="2025-01-30T13:24:47.067655112Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\"" Jan 30 13:24:47.070976 containerd[1738]: time="2025-01-30T13:24:47.068936593Z" level=info msg="StartContainer for \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\"" Jan 30 13:24:47.104664 systemd[1]: Started cri-containerd-ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0.scope - libcontainer container ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0. 
Jan 30 13:24:47.137434 systemd[1]: cri-containerd-ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0.scope: Deactivated successfully. Jan 30 13:24:47.144633 containerd[1738]: time="2025-01-30T13:24:47.144592730Z" level=info msg="StartContainer for \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\" returns successfully" Jan 30 13:24:47.177207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0-rootfs.mount: Deactivated successfully. Jan 30 13:24:47.209317 containerd[1738]: time="2025-01-30T13:24:47.209095653Z" level=info msg="shim disconnected" id=ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0 namespace=k8s.io Jan 30 13:24:47.209317 containerd[1738]: time="2025-01-30T13:24:47.209275333Z" level=warning msg="cleaning up after shim disconnected" id=ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0 namespace=k8s.io Jan 30 13:24:47.209317 containerd[1738]: time="2025-01-30T13:24:47.209284733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:24:47.222019 containerd[1738]: time="2025-01-30T13:24:47.220983028Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:24:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:24:47.493812 containerd[1738]: time="2025-01-30T13:24:47.493761616Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:47.496020 containerd[1738]: time="2025-01-30T13:24:47.495970659Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 30 13:24:47.498341 containerd[1738]: time="2025-01-30T13:24:47.498268262Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:47.500195 containerd[1738]: time="2025-01-30T13:24:47.500157385Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.678336022s" Jan 30 13:24:47.500786 containerd[1738]: time="2025-01-30T13:24:47.500318265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:24:47.503634 containerd[1738]: time="2025-01-30T13:24:47.503601869Z" level=info msg="CreateContainer within sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:24:47.538994 containerd[1738]: time="2025-01-30T13:24:47.538944354Z" level=info msg="CreateContainer within sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\"" Jan 30 13:24:47.540436 containerd[1738]: time="2025-01-30T13:24:47.540408196Z" level=info msg="StartContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\"" Jan 30 13:24:47.563308 systemd[1]: Started cri-containerd-976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314.scope - libcontainer container 976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314.
Jan 30 13:24:47.592531 containerd[1738]: time="2025-01-30T13:24:47.592475023Z" level=info msg="StartContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" returns successfully" Jan 30 13:24:48.018556 containerd[1738]: time="2025-01-30T13:24:48.018515927Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:24:48.067426 containerd[1738]: time="2025-01-30T13:24:48.067355590Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\"" Jan 30 13:24:48.069553 containerd[1738]: time="2025-01-30T13:24:48.068246271Z" level=info msg="StartContainer for \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\"" Jan 30 13:24:48.114334 systemd[1]: Started cri-containerd-8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb.scope - libcontainer container 8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb. Jan 30 13:24:48.164310 systemd[1]: cri-containerd-8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb.scope: Deactivated successfully.
Jan 30 13:24:48.167238 containerd[1738]: time="2025-01-30T13:24:48.167076037Z" level=info msg="StartContainer for \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\" returns successfully" Jan 30 13:24:48.203534 kubelet[3363]: I0130 13:24:48.202594 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7pqtp" podStartSLOduration=4.117373124 podStartE2EDuration="17.202575682s" podCreationTimestamp="2025-01-30 13:24:31 +0000 UTC" firstStartedPulling="2025-01-30 13:24:34.415979228 +0000 UTC m=+9.600909046" lastFinishedPulling="2025-01-30 13:24:47.501181786 +0000 UTC m=+22.686111604" observedRunningTime="2025-01-30 13:24:48.061128862 +0000 UTC m=+23.246058680" watchObservedRunningTime="2025-01-30 13:24:48.202575682 +0000 UTC m=+23.387505460" Jan 30 13:24:48.497072 containerd[1738]: time="2025-01-30T13:24:48.496999779Z" level=info msg="shim disconnected" id=8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb namespace=k8s.io Jan 30 13:24:48.497072 containerd[1738]: time="2025-01-30T13:24:48.497061619Z" level=warning msg="cleaning up after shim disconnected" id=8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb namespace=k8s.io Jan 30 13:24:48.497072 containerd[1738]: time="2025-01-30T13:24:48.497070659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:24:49.027142 containerd[1738]: time="2025-01-30T13:24:49.026763056Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:24:49.032419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb-rootfs.mount: Deactivated successfully. 
Jan 30 13:24:49.067892 containerd[1738]: time="2025-01-30T13:24:49.067726588Z" level=info msg="CreateContainer within sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\"" Jan 30 13:24:49.069446 containerd[1738]: time="2025-01-30T13:24:49.069296190Z" level=info msg="StartContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\"" Jan 30 13:24:49.101299 systemd[1]: Started cri-containerd-f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236.scope - libcontainer container f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236. Jan 30 13:24:49.131680 containerd[1738]: time="2025-01-30T13:24:49.131560030Z" level=info msg="StartContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" returns successfully" Jan 30 13:24:49.251351 kubelet[3363]: I0130 13:24:49.251148 3363 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:24:49.303901 systemd[1]: Created slice kubepods-burstable-pod39daba86_2b5f_4825_88bb_740a88fed604.slice - libcontainer container kubepods-burstable-pod39daba86_2b5f_4825_88bb_740a88fed604.slice. Jan 30 13:24:49.319372 systemd[1]: Created slice kubepods-burstable-poddd9ea06e_c187_4af4_9ae5_9a32c39b5b1e.slice - libcontainer container kubepods-burstable-poddd9ea06e_c187_4af4_9ae5_9a32c39b5b1e.slice. 
Jan 30 13:24:49.466609 kubelet[3363]: I0130 13:24:49.466573 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39daba86-2b5f-4825-88bb-740a88fed604-config-volume\") pod \"coredns-6f6b679f8f-c4lpx\" (UID: \"39daba86-2b5f-4825-88bb-740a88fed604\") " pod="kube-system/coredns-6f6b679f8f-c4lpx" Jan 30 13:24:49.466609 kubelet[3363]: I0130 13:24:49.466614 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc6dd\" (UniqueName: \"kubernetes.io/projected/39daba86-2b5f-4825-88bb-740a88fed604-kube-api-access-nc6dd\") pod \"coredns-6f6b679f8f-c4lpx\" (UID: \"39daba86-2b5f-4825-88bb-740a88fed604\") " pod="kube-system/coredns-6f6b679f8f-c4lpx" Jan 30 13:24:49.466892 kubelet[3363]: I0130 13:24:49.466635 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e-config-volume\") pod \"coredns-6f6b679f8f-b6nfh\" (UID: \"dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e\") " pod="kube-system/coredns-6f6b679f8f-b6nfh" Jan 30 13:24:49.466892 kubelet[3363]: I0130 13:24:49.466652 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bllvg\" (UniqueName: \"kubernetes.io/projected/dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e-kube-api-access-bllvg\") pod \"coredns-6f6b679f8f-b6nfh\" (UID: \"dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e\") " pod="kube-system/coredns-6f6b679f8f-b6nfh" Jan 30 13:24:49.610744 containerd[1738]: time="2025-01-30T13:24:49.610178961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4lpx,Uid:39daba86-2b5f-4825-88bb-740a88fed604,Namespace:kube-system,Attempt:0,}"
Jan 30 13:24:49.627122 containerd[1738]: time="2025-01-30T13:24:49.626716022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6nfh,Uid:dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e,Namespace:kube-system,Attempt:0,}" Jan 30 13:24:51.302628 systemd-networkd[1535]: cilium_host: Link UP Jan 30 13:24:51.302991 systemd-networkd[1535]: cilium_net: Link UP Jan 30 13:24:51.303002 systemd-networkd[1535]: cilium_net: Gained carrier Jan 30 13:24:51.303255 systemd-networkd[1535]: cilium_host: Gained carrier Jan 30 13:24:51.306277 systemd-networkd[1535]: cilium_net: Gained IPv6LL Jan 30 13:24:51.391230 systemd-networkd[1535]: cilium_host: Gained IPv6LL Jan 30 13:24:51.474177 systemd-networkd[1535]: cilium_vxlan: Link UP Jan 30 13:24:51.474185 systemd-networkd[1535]: cilium_vxlan: Gained carrier Jan 30 13:24:51.802219 kernel: NET: Registered PF_ALG protocol family Jan 30 13:24:52.639966 systemd-networkd[1535]: lxc_health: Link UP Jan 30 13:24:52.647413 systemd-networkd[1535]: lxc_health: Gained carrier Jan 30 13:24:52.808252 systemd-networkd[1535]: cilium_vxlan: Gained IPv6LL Jan 30 13:24:53.178263 systemd-networkd[1535]: lxcef39ae77febd: Link UP Jan 30 13:24:53.185210 kernel: eth0: renamed from tmp3e1bb Jan 30 13:24:53.191461 systemd-networkd[1535]: lxcef39ae77febd: Gained carrier Jan 30 13:24:53.208785 systemd-networkd[1535]: lxcbe78f838b223: Link UP Jan 30 13:24:53.224156 kernel: eth0: renamed from tmp23cf3 Jan 30 13:24:53.233274 systemd-networkd[1535]: lxcbe78f838b223: Gained carrier Jan 30 13:24:53.910104 kubelet[3363]: I0130 13:24:53.909660 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6bxdc" podStartSLOduration=13.287660115 podStartE2EDuration="22.909643714s" podCreationTimestamp="2025-01-30 13:24:31 +0000 UTC" firstStartedPulling="2025-01-30 13:24:34.199585604 +0000 UTC m=+9.384515422" lastFinishedPulling="2025-01-30 13:24:43.821569203 +0000 UTC m=+19.006499021" observedRunningTime="2025-01-30 13:24:50.053809728 +0000 UTC m=+25.238739546" watchObservedRunningTime="2025-01-30 13:24:53.909643714 +0000 UTC m=+29.094573532"
Jan 30 13:24:54.279652 systemd-networkd[1535]: lxcbe78f838b223: Gained IPv6LL Jan 30 13:24:54.536226 systemd-networkd[1535]: lxcef39ae77febd: Gained IPv6LL Jan 30 13:24:54.599269 systemd-networkd[1535]: lxc_health: Gained IPv6LL Jan 30 13:24:56.889422 containerd[1738]: time="2025-01-30T13:24:56.889322697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:56.890425 containerd[1738]: time="2025-01-30T13:24:56.890215019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:56.890425 containerd[1738]: time="2025-01-30T13:24:56.890298899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:56.890641 containerd[1738]: time="2025-01-30T13:24:56.890571179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:56.898268 containerd[1738]: time="2025-01-30T13:24:56.897022467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:24:56.898268 containerd[1738]: time="2025-01-30T13:24:56.897122827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:24:56.898268 containerd[1738]: time="2025-01-30T13:24:56.897139347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:24:56.898268 containerd[1738]: time="2025-01-30T13:24:56.897565308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:24:56.930352 systemd[1]: Started cri-containerd-23cf354c9c13c7c48e71bcfe62a002c46da3f76f32d45bbdf489922cc52ea34a.scope - libcontainer container 23cf354c9c13c7c48e71bcfe62a002c46da3f76f32d45bbdf489922cc52ea34a. Jan 30 13:24:56.932768 systemd[1]: Started cri-containerd-3e1bb1c5881c8c3b40f6fe2aacda9a2c053f2d7b71d83cccf144a3fede5264c4.scope - libcontainer container 3e1bb1c5881c8c3b40f6fe2aacda9a2c053f2d7b71d83cccf144a3fede5264c4. Jan 30 13:24:56.976586 containerd[1738]: time="2025-01-30T13:24:56.976536608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6nfh,Uid:dd9ea06e-c187-4af4-9ae5-9a32c39b5b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"23cf354c9c13c7c48e71bcfe62a002c46da3f76f32d45bbdf489922cc52ea34a\"" Jan 30 13:24:56.980624 containerd[1738]: time="2025-01-30T13:24:56.980585853Z" level=info msg="CreateContainer within sandbox \"23cf354c9c13c7c48e71bcfe62a002c46da3f76f32d45bbdf489922cc52ea34a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:24:57.005368 containerd[1738]: time="2025-01-30T13:24:57.004727724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4lpx,Uid:39daba86-2b5f-4825-88bb-740a88fed604,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e1bb1c5881c8c3b40f6fe2aacda9a2c053f2d7b71d83cccf144a3fede5264c4\"" Jan 30 13:24:57.009401 containerd[1738]: time="2025-01-30T13:24:57.009331970Z" level=info msg="CreateContainer within sandbox \"3e1bb1c5881c8c3b40f6fe2aacda9a2c053f2d7b71d83cccf144a3fede5264c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:24:57.030511 containerd[1738]: time="2025-01-30T13:24:57.030455477Z" level=info msg="CreateContainer within sandbox \"23cf354c9c13c7c48e71bcfe62a002c46da3f76f32d45bbdf489922cc52ea34a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6c9d525bce2b3df9af317eb9b3ca1e85d463a13354713b95c9e373907990db8\""
Jan 30 13:24:57.031833 containerd[1738]: time="2025-01-30T13:24:57.031796958Z" level=info msg="StartContainer for \"b6c9d525bce2b3df9af317eb9b3ca1e85d463a13354713b95c9e373907990db8\"" Jan 30 13:24:57.064399 containerd[1738]: time="2025-01-30T13:24:57.063820719Z" level=info msg="CreateContainer within sandbox \"3e1bb1c5881c8c3b40f6fe2aacda9a2c053f2d7b71d83cccf144a3fede5264c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13809e786a5bcf256b2534868750183ebac998474789cbff2203f4fceb87a568\"" Jan 30 13:24:57.064588 containerd[1738]: time="2025-01-30T13:24:57.064469200Z" level=info msg="StartContainer for \"13809e786a5bcf256b2534868750183ebac998474789cbff2203f4fceb87a568\"" Jan 30 13:24:57.067338 systemd[1]: Started cri-containerd-b6c9d525bce2b3df9af317eb9b3ca1e85d463a13354713b95c9e373907990db8.scope - libcontainer container b6c9d525bce2b3df9af317eb9b3ca1e85d463a13354713b95c9e373907990db8. Jan 30 13:24:57.104312 systemd[1]: Started cri-containerd-13809e786a5bcf256b2534868750183ebac998474789cbff2203f4fceb87a568.scope - libcontainer container 13809e786a5bcf256b2534868750183ebac998474789cbff2203f4fceb87a568. Jan 30 13:24:57.119199 containerd[1738]: time="2025-01-30T13:24:57.119046229Z" level=info msg="StartContainer for \"b6c9d525bce2b3df9af317eb9b3ca1e85d463a13354713b95c9e373907990db8\" returns successfully" Jan 30 13:24:57.151087 containerd[1738]: time="2025-01-30T13:24:57.149819028Z" level=info msg="StartContainer for \"13809e786a5bcf256b2534868750183ebac998474789cbff2203f4fceb87a568\" returns successfully" Jan 30 13:24:57.899227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692046423.mount: Deactivated successfully.
Jan 30 13:24:58.070409 kubelet[3363]: I0130 13:24:58.069839 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b6nfh" podStartSLOduration=27.069820636 podStartE2EDuration="27.069820636s" podCreationTimestamp="2025-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:58.068058754 +0000 UTC m=+33.252988572" watchObservedRunningTime="2025-01-30 13:24:58.069820636 +0000 UTC m=+33.254750414" Jan 30 13:24:58.106291 kubelet[3363]: I0130 13:24:58.106226 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-c4lpx" podStartSLOduration=27.106210603 podStartE2EDuration="27.106210603s" podCreationTimestamp="2025-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:24:58.085545896 +0000 UTC m=+33.270475714" watchObservedRunningTime="2025-01-30 13:24:58.106210603 +0000 UTC m=+33.291140381" Jan 30 13:24:58.549822 kubelet[3363]: I0130 13:24:58.549635 3363 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:26:21.634266 systemd[1]: Started sshd@7-10.200.20.42:22-10.200.16.10:41760.service - OpenSSH per-connection server daemon (10.200.16.10:41760). Jan 30 13:26:22.088057 sshd[4752]: Accepted publickey for core from 10.200.16.10 port 41760 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:22.090128 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:22.093866 systemd-logind[1699]: New session 10 of user core. Jan 30 13:26:22.099283 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 13:26:22.490154 sshd[4754]: Connection closed by 10.200.16.10 port 41760 Jan 30 13:26:22.490824 sshd-session[4752]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:22.494144 systemd[1]: sshd@7-10.200.20.42:22-10.200.16.10:41760.service: Deactivated successfully. Jan 30 13:26:22.495641 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:26:22.496609 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:26:22.497765 systemd-logind[1699]: Removed session 10. Jan 30 13:26:27.577262 systemd[1]: Started sshd@8-10.200.20.42:22-10.200.16.10:33046.service - OpenSSH per-connection server daemon (10.200.16.10:33046). Jan 30 13:26:28.007474 sshd[4767]: Accepted publickey for core from 10.200.16.10 port 33046 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:28.008744 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:28.013887 systemd-logind[1699]: New session 11 of user core. Jan 30 13:26:28.025310 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:26:28.394644 sshd[4769]: Connection closed by 10.200.16.10 port 33046 Jan 30 13:26:28.395259 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:28.398710 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:26:28.399769 systemd[1]: sshd@8-10.200.20.42:22-10.200.16.10:33046.service: Deactivated successfully. Jan 30 13:26:28.401672 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:26:28.402775 systemd-logind[1699]: Removed session 11. Jan 30 13:26:33.481376 systemd[1]: Started sshd@9-10.200.20.42:22-10.200.16.10:33052.service - OpenSSH per-connection server daemon (10.200.16.10:33052). 
Jan 30 13:26:33.908023 sshd[4780]: Accepted publickey for core from 10.200.16.10 port 33052 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:33.909380 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:33.914147 systemd-logind[1699]: New session 12 of user core. Jan 30 13:26:33.921458 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:26:34.308403 sshd[4782]: Connection closed by 10.200.16.10 port 33052 Jan 30 13:26:34.308961 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:34.312622 systemd[1]: sshd@9-10.200.20.42:22-10.200.16.10:33052.service: Deactivated successfully. Jan 30 13:26:34.312770 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:26:34.315498 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:26:34.316613 systemd-logind[1699]: Removed session 12. Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036279 1703 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036343 1703 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036550 1703 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036886 1703 omaha_request_params.cc:62] Current group set to beta Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036974 1703 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036983 1703 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.036998 1703 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.037022 1703 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.037065 1703 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.037073 1703 omaha_request_action.cc:272] Request:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]:
Jan 30 13:26:38.037215 update_engine[1703]: I20250130 13:26:38.037079 1703 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:26:38.038677 update_engine[1703]: I20250130 13:26:38.038064 1703 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:26:38.038677 update_engine[1703]: I20250130 13:26:38.038614 1703 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:26:38.038854 locksmithd[1771]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 30 13:26:38.121745 update_engine[1703]: E20250130 13:26:38.121642 1703 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 13:26:38.121967 update_engine[1703]: I20250130 13:26:38.121793 1703 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 30 13:26:39.389378 systemd[1]: Started sshd@10-10.200.20.42:22-10.200.16.10:42196.service - OpenSSH per-connection server daemon (10.200.16.10:42196).
Jan 30 13:26:39.817821 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 42196 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:26:39.819365 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:26:39.824846 systemd-logind[1699]: New session 13 of user core.
Jan 30 13:26:39.830317 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:26:40.208646 sshd[4798]: Connection closed by 10.200.16.10 port 42196
Jan 30 13:26:40.209097 sshd-session[4796]: pam_unix(sshd:session): session closed for user core
Jan 30 13:26:40.212373 systemd[1]: sshd@10-10.200.20.42:22-10.200.16.10:42196.service: Deactivated successfully.
Jan 30 13:26:40.217439 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:26:40.218101 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:26:40.219213 systemd-logind[1699]: Removed session 13.
Jan 30 13:26:40.292391 systemd[1]: Started sshd@11-10.200.20.42:22-10.200.16.10:42210.service - OpenSSH per-connection server daemon (10.200.16.10:42210).
Jan 30 13:26:40.719486 sshd[4810]: Accepted publickey for core from 10.200.16.10 port 42210 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:26:40.720982 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:26:40.725605 systemd-logind[1699]: New session 14 of user core.
Jan 30 13:26:40.734260 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:26:41.154702 sshd[4812]: Connection closed by 10.200.16.10 port 42210
Jan 30 13:26:41.155359 sshd-session[4810]: pam_unix(sshd:session): session closed for user core
Jan 30 13:26:41.159804 systemd[1]: sshd@11-10.200.20.42:22-10.200.16.10:42210.service: Deactivated successfully.
Jan 30 13:26:41.161457 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:26:41.163376 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:26:41.164781 systemd-logind[1699]: Removed session 14.
Jan 30 13:26:41.240463 systemd[1]: Started sshd@12-10.200.20.42:22-10.200.16.10:42212.service - OpenSSH per-connection server daemon (10.200.16.10:42212).
Jan 30 13:26:41.683990 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 42212 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:26:41.685427 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:26:41.689302 systemd-logind[1699]: New session 15 of user core.
Jan 30 13:26:41.696271 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:26:42.073318 sshd[4823]: Connection closed by 10.200.16.10 port 42212
Jan 30 13:26:42.073856 sshd-session[4821]: pam_unix(sshd:session): session closed for user core
Jan 30 13:26:42.077158 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:26:42.077817 systemd[1]: sshd@12-10.200.20.42:22-10.200.16.10:42212.service: Deactivated successfully.
Jan 30 13:26:42.079619 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:26:42.080896 systemd-logind[1699]: Removed session 15.
Jan 30 13:26:47.151185 systemd[1]: Started sshd@13-10.200.20.42:22-10.200.16.10:43238.service - OpenSSH per-connection server daemon (10.200.16.10:43238).
Jan 30 13:26:47.586561 sshd[4834]: Accepted publickey for core from 10.200.16.10 port 43238 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:26:47.587862 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:26:47.592464 systemd-logind[1699]: New session 16 of user core.
Jan 30 13:26:47.596279 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:26:47.961218 sshd[4836]: Connection closed by 10.200.16.10 port 43238 Jan 30 13:26:47.961737 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:47.965063 systemd-logind[1699]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:26:47.965754 systemd[1]: sshd@13-10.200.20.42:22-10.200.16.10:43238.service: Deactivated successfully. Jan 30 13:26:47.967938 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:26:47.969357 systemd-logind[1699]: Removed session 16. Jan 30 13:26:48.032925 update_engine[1703]: I20250130 13:26:48.032411 1703 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:48.032925 update_engine[1703]: I20250130 13:26:48.032662 1703 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:48.032925 update_engine[1703]: I20250130 13:26:48.032887 1703 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:26:48.101511 update_engine[1703]: E20250130 13:26:48.101400 1703 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:48.101511 update_engine[1703]: I20250130 13:26:48.101485 1703 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 13:26:53.039363 systemd[1]: Started sshd@14-10.200.20.42:22-10.200.16.10:43252.service - OpenSSH per-connection server daemon (10.200.16.10:43252). Jan 30 13:26:53.469094 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 43252 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:53.470511 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:53.474465 systemd-logind[1699]: New session 17 of user core. Jan 30 13:26:53.482570 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 30 13:26:53.854853 sshd[4850]: Connection closed by 10.200.16.10 port 43252 Jan 30 13:26:53.855472 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:53.858623 systemd[1]: sshd@14-10.200.20.42:22-10.200.16.10:43252.service: Deactivated successfully. Jan 30 13:26:53.860938 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:26:53.861976 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:26:53.863304 systemd-logind[1699]: Removed session 17. Jan 30 13:26:53.936668 systemd[1]: Started sshd@15-10.200.20.42:22-10.200.16.10:43264.service - OpenSSH per-connection server daemon (10.200.16.10:43264). Jan 30 13:26:54.369882 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 43264 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:54.371249 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:54.374973 systemd-logind[1699]: New session 18 of user core. Jan 30 13:26:54.382273 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:26:54.805164 sshd[4862]: Connection closed by 10.200.16.10 port 43264 Jan 30 13:26:54.805713 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:54.809202 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:26:54.810066 systemd[1]: sshd@15-10.200.20.42:22-10.200.16.10:43264.service: Deactivated successfully. Jan 30 13:26:54.812734 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:26:54.813788 systemd-logind[1699]: Removed session 18. Jan 30 13:26:54.899371 systemd[1]: Started sshd@16-10.200.20.42:22-10.200.16.10:43274.service - OpenSSH per-connection server daemon (10.200.16.10:43274). 
Jan 30 13:26:55.324515 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 43274 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:55.325818 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:55.330712 systemd-logind[1699]: New session 19 of user core. Jan 30 13:26:55.336273 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:26:57.149378 sshd[4873]: Connection closed by 10.200.16.10 port 43274 Jan 30 13:26:57.150358 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:57.154000 systemd[1]: sshd@16-10.200.20.42:22-10.200.16.10:43274.service: Deactivated successfully. Jan 30 13:26:57.156667 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:26:57.160336 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:26:57.161757 systemd-logind[1699]: Removed session 19. Jan 30 13:26:57.230138 systemd[1]: Started sshd@17-10.200.20.42:22-10.200.16.10:57870.service - OpenSSH per-connection server daemon (10.200.16.10:57870). Jan 30 13:26:57.658655 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 57870 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:57.660088 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:57.664243 systemd-logind[1699]: New session 20 of user core. Jan 30 13:26:57.673303 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:26:58.033083 update_engine[1703]: I20250130 13:26:58.033013 1703 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:58.033560 update_engine[1703]: I20250130 13:26:58.033526 1703 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:58.033804 update_engine[1703]: I20250130 13:26:58.033777 1703 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 13:26:58.116975 update_engine[1703]: E20250130 13:26:58.116915 1703 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:58.117179 update_engine[1703]: I20250130 13:26:58.117012 1703 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 13:26:58.158214 sshd[4891]: Connection closed by 10.200.16.10 port 57870 Jan 30 13:26:58.158579 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:58.162250 systemd[1]: sshd@17-10.200.20.42:22-10.200.16.10:57870.service: Deactivated successfully. Jan 30 13:26:58.163949 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:26:58.165704 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:26:58.166567 systemd-logind[1699]: Removed session 20. Jan 30 13:26:58.239389 systemd[1]: Started sshd@18-10.200.20.42:22-10.200.16.10:57874.service - OpenSSH per-connection server daemon (10.200.16.10:57874). Jan 30 13:26:58.664088 sshd[4900]: Accepted publickey for core from 10.200.16.10 port 57874 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:58.665412 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:58.669059 systemd-logind[1699]: New session 21 of user core. Jan 30 13:26:58.676336 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:26:59.033843 sshd[4902]: Connection closed by 10.200.16.10 port 57874 Jan 30 13:26:59.033340 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:59.036851 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:26:59.037247 systemd[1]: sshd@18-10.200.20.42:22-10.200.16.10:57874.service: Deactivated successfully. Jan 30 13:26:59.040531 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:26:59.043456 systemd-logind[1699]: Removed session 21. 
Jan 30 13:27:04.111958 systemd[1]: Started sshd@19-10.200.20.42:22-10.200.16.10:57880.service - OpenSSH per-connection server daemon (10.200.16.10:57880). Jan 30 13:27:04.543695 sshd[4917]: Accepted publickey for core from 10.200.16.10 port 57880 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:04.545064 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:04.548533 systemd-logind[1699]: New session 22 of user core. Jan 30 13:27:04.556261 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:27:04.913218 sshd[4919]: Connection closed by 10.200.16.10 port 57880 Jan 30 13:27:04.913894 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:04.917386 systemd[1]: sshd@19-10.200.20.42:22-10.200.16.10:57880.service: Deactivated successfully. Jan 30 13:27:04.919545 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:27:04.920362 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:27:04.921445 systemd-logind[1699]: Removed session 22. Jan 30 13:27:08.035767 update_engine[1703]: I20250130 13:27:08.035274 1703 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:27:08.035767 update_engine[1703]: I20250130 13:27:08.035491 1703 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:27:08.035767 update_engine[1703]: I20250130 13:27:08.035727 1703 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 13:27:08.054139 update_engine[1703]: E20250130 13:27:08.054050 1703 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:27:08.054139 update_engine[1703]: I20250130 13:27:08.054143 1703 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:27:08.054283 update_engine[1703]: I20250130 13:27:08.054154 1703 omaha_request_action.cc:617] Omaha request response: Jan 30 13:27:08.054283 update_engine[1703]: E20250130 13:27:08.054231 1703 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 13:27:08.054283 update_engine[1703]: I20250130 13:27:08.054247 1703 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 13:27:08.054283 update_engine[1703]: I20250130 13:27:08.054252 1703 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:27:08.054283 update_engine[1703]: I20250130 13:27:08.054257 1703 update_attempter.cc:306] Processing Done. Jan 30 13:27:08.054283 update_engine[1703]: E20250130 13:27:08.054270 1703 update_attempter.cc:619] Update failed. Jan 30 13:27:08.054283 update_engine[1703]: I20250130 13:27:08.054275 1703 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054279 1703 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054289 1703 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054350 1703 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054369 1703 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054374 1703 omaha_request_action.cc:272] Request:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]:
Jan 30 13:27:08.054408 update_engine[1703]: I20250130 13:27:08.054381 1703 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:27:08.054601 update_engine[1703]: I20250130 13:27:08.054527 1703 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:27:08.054751 update_engine[1703]: I20250130 13:27:08.054718 1703 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:27:08.054911 locksmithd[1771]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 13:27:08.140594 update_engine[1703]: E20250130 13:27:08.140523 1703 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140623 1703 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140631 1703 omaha_request_action.cc:617] Omaha request response: Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140638 1703 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140643 1703 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140647 1703 update_attempter.cc:306] Processing Done. Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140653 1703 update_attempter.cc:310] Error event sent. Jan 30 13:27:08.140720 update_engine[1703]: I20250130 13:27:08.140662 1703 update_check_scheduler.cc:74] Next update check in 48m30s Jan 30 13:27:08.140983 locksmithd[1771]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 13:27:09.992269 systemd[1]: Started sshd@20-10.200.20.42:22-10.200.16.10:43792.service - OpenSSH per-connection server daemon (10.200.16.10:43792). Jan 30 13:27:10.422564 sshd[4932]: Accepted publickey for core from 10.200.16.10 port 43792 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:10.423845 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:10.427924 systemd-logind[1699]: New session 23 of user core. 
Jan 30 13:27:10.433246 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:27:10.810814 sshd[4934]: Connection closed by 10.200.16.10 port 43792 Jan 30 13:27:10.811450 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:10.814821 systemd[1]: sshd@20-10.200.20.42:22-10.200.16.10:43792.service: Deactivated successfully. Jan 30 13:27:10.817678 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:27:10.818814 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:27:10.820016 systemd-logind[1699]: Removed session 23. Jan 30 13:27:15.896356 systemd[1]: Started sshd@21-10.200.20.42:22-10.200.16.10:42890.service - OpenSSH per-connection server daemon (10.200.16.10:42890). Jan 30 13:27:16.339418 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 42890 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:16.340728 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:16.344894 systemd-logind[1699]: New session 24 of user core. Jan 30 13:27:16.355322 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:27:16.730404 sshd[4946]: Connection closed by 10.200.16.10 port 42890 Jan 30 13:27:16.730893 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:16.734492 systemd[1]: sshd@21-10.200.20.42:22-10.200.16.10:42890.service: Deactivated successfully. Jan 30 13:27:16.736150 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:27:16.736833 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:27:16.737857 systemd-logind[1699]: Removed session 24. Jan 30 13:27:16.806771 systemd[1]: Started sshd@22-10.200.20.42:22-10.200.16.10:42898.service - OpenSSH per-connection server daemon (10.200.16.10:42898). 
Jan 30 13:27:17.240103 sshd[4956]: Accepted publickey for core from 10.200.16.10 port 42898 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:17.241545 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:17.246936 systemd-logind[1699]: New session 25 of user core. Jan 30 13:27:17.251320 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:27:19.148615 containerd[1738]: time="2025-01-30T13:27:19.148549674Z" level=info msg="StopContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" with timeout 30 (s)" Jan 30 13:27:19.153722 containerd[1738]: time="2025-01-30T13:27:19.149578435Z" level=info msg="Stop container \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" with signal terminated" Jan 30 13:27:19.160878 systemd[1]: cri-containerd-976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314.scope: Deactivated successfully. Jan 30 13:27:19.167343 containerd[1738]: time="2025-01-30T13:27:19.166723096Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:27:19.179006 containerd[1738]: time="2025-01-30T13:27:19.178909790Z" level=info msg="StopContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" with timeout 2 (s)" Jan 30 13:27:19.179821 containerd[1738]: time="2025-01-30T13:27:19.179557871Z" level=info msg="Stop container \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" with signal terminated" Jan 30 13:27:19.186016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314-rootfs.mount: Deactivated successfully. 
Jan 30 13:27:19.191864 systemd-networkd[1535]: lxc_health: Link DOWN
Jan 30 13:27:19.191871 systemd-networkd[1535]: lxc_health: Lost carrier
Jan 30 13:27:19.212027 systemd[1]: cri-containerd-f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236.scope: Deactivated successfully.
Jan 30 13:27:19.212981 systemd[1]: cri-containerd-f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236.scope: Consumed 6.506s CPU time.
Jan 30 13:27:19.236613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236-rootfs.mount: Deactivated successfully.
Jan 30 13:27:19.261172 containerd[1738]: time="2025-01-30T13:27:19.260947128Z" level=info msg="shim disconnected" id=976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314 namespace=k8s.io
Jan 30 13:27:19.261172 containerd[1738]: time="2025-01-30T13:27:19.261069809Z" level=warning msg="cleaning up after shim disconnected" id=976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314 namespace=k8s.io
Jan 30 13:27:19.261172 containerd[1738]: time="2025-01-30T13:27:19.261082449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:27:19.263972 containerd[1738]: time="2025-01-30T13:27:19.263898052Z" level=info msg="shim disconnected" id=f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236 namespace=k8s.io
Jan 30 13:27:19.264036 containerd[1738]: time="2025-01-30T13:27:19.263974732Z" level=warning msg="cleaning up after shim disconnected" id=f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236 namespace=k8s.io
Jan 30 13:27:19.264036 containerd[1738]: time="2025-01-30T13:27:19.263983252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:27:19.286054 containerd[1738]: time="2025-01-30T13:27:19.286007478Z" level=info msg="StopContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" returns successfully"
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286754159Z" level=info msg="StopPodSandbox for \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\""
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286808639Z" level=info msg="Container to stop \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286819879Z" level=info msg="Container to stop \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286828199Z" level=info msg="Container to stop \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286836479Z" level=info msg="Container to stop \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:27:19.286979 containerd[1738]: time="2025-01-30T13:27:19.286844799Z" level=info msg="Container to stop \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:27:19.288972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63-shm.mount: Deactivated successfully.
Jan 30 13:27:19.289778 containerd[1738]: time="2025-01-30T13:27:19.289555003Z" level=info msg="StopContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" returns successfully" Jan 30 13:27:19.290971 containerd[1738]: time="2025-01-30T13:27:19.290900924Z" level=info msg="StopPodSandbox for \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\"" Jan 30 13:27:19.291223 containerd[1738]: time="2025-01-30T13:27:19.291200885Z" level=info msg="Container to stop \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:27:19.293436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36-shm.mount: Deactivated successfully. Jan 30 13:27:19.298325 systemd[1]: cri-containerd-23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63.scope: Deactivated successfully. Jan 30 13:27:19.302454 systemd[1]: cri-containerd-1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36.scope: Deactivated successfully. 
Jan 30 13:27:19.346133 containerd[1738]: time="2025-01-30T13:27:19.345790430Z" level=info msg="shim disconnected" id=1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36 namespace=k8s.io Jan 30 13:27:19.346133 containerd[1738]: time="2025-01-30T13:27:19.345905750Z" level=warning msg="cleaning up after shim disconnected" id=1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36 namespace=k8s.io Jan 30 13:27:19.346133 containerd[1738]: time="2025-01-30T13:27:19.345914830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:19.349842 containerd[1738]: time="2025-01-30T13:27:19.349792235Z" level=info msg="shim disconnected" id=23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63 namespace=k8s.io Jan 30 13:27:19.350292 containerd[1738]: time="2025-01-30T13:27:19.350150515Z" level=warning msg="cleaning up after shim disconnected" id=23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63 namespace=k8s.io Jan 30 13:27:19.350292 containerd[1738]: time="2025-01-30T13:27:19.350180275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:19.363080 containerd[1738]: time="2025-01-30T13:27:19.362938050Z" level=info msg="TearDown network for sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" successfully" Jan 30 13:27:19.363080 containerd[1738]: time="2025-01-30T13:27:19.362974890Z" level=info msg="StopPodSandbox for \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" returns successfully" Jan 30 13:27:19.364212 containerd[1738]: time="2025-01-30T13:27:19.364158412Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:27:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:27:19.365615 containerd[1738]: time="2025-01-30T13:27:19.365499973Z" level=info msg="TearDown network for sandbox 
\"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" successfully" Jan 30 13:27:19.365615 containerd[1738]: time="2025-01-30T13:27:19.365525493Z" level=info msg="StopPodSandbox for \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" returns successfully" Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514065 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-clustermesh-secrets\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514153 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-run\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514185 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-kernel\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514217 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-bpf-maps\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514247 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-lib-modules\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: 
\"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.514699 kubelet[3363]: I0130 13:27:19.514260 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cni-path\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514277 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-cilium-config-path\") pod \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\" (UID: \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514294 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-config-path\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514308 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hostproc\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514323 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-net\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514339 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hubble-tls\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515156 kubelet[3363]: I0130 13:27:19.514355 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-cgroup\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515283 kubelet[3363]: I0130 13:27:19.514374 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56fvv\" (UniqueName: \"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-kube-api-access-56fvv\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515283 kubelet[3363]: I0130 13:27:19.514389 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-etc-cni-netd\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515283 kubelet[3363]: I0130 13:27:19.514403 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-xtables-lock\") pod \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\" (UID: \"3e54aae8-0a7a-4dfb-b646-da9358b0a0ce\") " Jan 30 13:27:19.515283 kubelet[3363]: I0130 13:27:19.514420 3363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhsx2\" (UniqueName: \"kubernetes.io/projected/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-kube-api-access-lhsx2\") pod \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\" (UID: \"df0a0c39-87c3-42ba-aec0-f4a71b21a71b\") " Jan 30 13:27:19.516822 kubelet[3363]: I0130 
13:27:19.516795 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.518207 kubelet[3363]: I0130 13:27:19.516952 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.518207 kubelet[3363]: I0130 13:27:19.516975 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.518207 kubelet[3363]: I0130 13:27:19.516989 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.518207 kubelet[3363]: I0130 13:27:19.517024 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.519345 kubelet[3363]: I0130 13:27:19.519322 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.519497 kubelet[3363]: I0130 13:27:19.519483 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.520069 kubelet[3363]: I0130 13:27:19.520029 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19.520215 kubelet[3363]: I0130 13:27:19.520182 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19.520259 kubelet[3363]: I0130 13:27:19.520185 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.520328 kubelet[3363]: I0130 13:27:19.520307 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.520360 kubelet[3363]: I0130 13:27:19.520335 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:27:19.520562 kubelet[3363]: I0130 13:27:19.520525 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-kube-api-access-lhsx2" (OuterVolumeSpecName: "kube-api-access-lhsx2") pod "df0a0c39-87c3-42ba-aec0-f4a71b21a71b" (UID: "df0a0c39-87c3-42ba-aec0-f4a71b21a71b"). InnerVolumeSpecName "kube-api-access-lhsx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19.521004 kubelet[3363]: I0130 13:27:19.520981 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df0a0c39-87c3-42ba-aec0-f4a71b21a71b" (UID: "df0a0c39-87c3-42ba-aec0-f4a71b21a71b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19.522636 kubelet[3363]: I0130 13:27:19.522553 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-kube-api-access-56fvv" (OuterVolumeSpecName: "kube-api-access-56fvv") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "kube-api-access-56fvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19.522758 kubelet[3363]: I0130 13:27:19.522578 3363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" (UID: "3e54aae8-0a7a-4dfb-b646-da9358b0a0ce"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19.614840 kubelet[3363]: I0130 13:27:19.614804 3363 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-run\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.614840 kubelet[3363]: I0130 13:27:19.614837 3363 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.614846 3363 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-clustermesh-secrets\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615021 3363 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cni-path\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615033 3363 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-cilium-config-path\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615045 3363 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-bpf-maps\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615209 3363 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-lib-modules\") on 
node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615221 3363 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-config-path\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615230 3363 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hostproc\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617144 kubelet[3363]: I0130 13:27:19.615241 3363 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-host-proc-sys-net\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615253 3363 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-hubble-tls\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615264 3363 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-cilium-cgroup\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615273 3363 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-56fvv\" (UniqueName: \"kubernetes.io/projected/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-kube-api-access-56fvv\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615283 3363 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-etc-cni-netd\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615296 3363 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce-xtables-lock\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:19.617403 kubelet[3363]: I0130 13:27:19.615304 3363 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lhsx2\" (UniqueName: \"kubernetes.io/projected/df0a0c39-87c3-42ba-aec0-f4a71b21a71b-kube-api-access-lhsx2\") on node \"ci-4186.1.0-a-4db8cd7df2\" DevicePath \"\"" Jan 30 13:27:20.032209 kubelet[3363]: E0130 13:27:20.032143 3363 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:27:20.142435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36-rootfs.mount: Deactivated successfully. Jan 30 13:27:20.142539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63-rootfs.mount: Deactivated successfully. Jan 30 13:27:20.142588 systemd[1]: var-lib-kubelet-pods-df0a0c39\x2d87c3\x2d42ba\x2daec0\x2df4a71b21a71b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhsx2.mount: Deactivated successfully. Jan 30 13:27:20.142648 systemd[1]: var-lib-kubelet-pods-3e54aae8\x2d0a7a\x2d4dfb\x2db646\x2dda9358b0a0ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56fvv.mount: Deactivated successfully. Jan 30 13:27:20.142696 systemd[1]: var-lib-kubelet-pods-3e54aae8\x2d0a7a\x2d4dfb\x2db646\x2dda9358b0a0ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:27:20.144239 systemd[1]: var-lib-kubelet-pods-3e54aae8\x2d0a7a\x2d4dfb\x2db646\x2dda9358b0a0ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:27:20.319895 kubelet[3363]: I0130 13:27:20.319784 3363 scope.go:117] "RemoveContainer" containerID="976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314" Jan 30 13:27:20.323404 containerd[1738]: time="2025-01-30T13:27:20.322732039Z" level=info msg="RemoveContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\"" Jan 30 13:27:20.326139 systemd[1]: Removed slice kubepods-besteffort-poddf0a0c39_87c3_42ba_aec0_f4a71b21a71b.slice - libcontainer container kubepods-besteffort-poddf0a0c39_87c3_42ba_aec0_f4a71b21a71b.slice. Jan 30 13:27:20.332354 systemd[1]: Removed slice kubepods-burstable-pod3e54aae8_0a7a_4dfb_b646_da9358b0a0ce.slice - libcontainer container kubepods-burstable-pod3e54aae8_0a7a_4dfb_b646_da9358b0a0ce.slice. Jan 30 13:27:20.332461 systemd[1]: kubepods-burstable-pod3e54aae8_0a7a_4dfb_b646_da9358b0a0ce.slice: Consumed 6.579s CPU time. 
Jan 30 13:27:20.346132 containerd[1738]: time="2025-01-30T13:27:20.346014267Z" level=info msg="RemoveContainer for \"976128d8a5fef2d10a781f4900e355d6cf32ec487080cbcb03cd09264fdf6314\" returns successfully" Jan 30 13:27:20.346637 kubelet[3363]: I0130 13:27:20.346594 3363 scope.go:117] "RemoveContainer" containerID="f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236" Jan 30 13:27:20.350548 containerd[1738]: time="2025-01-30T13:27:20.349600471Z" level=info msg="RemoveContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\"" Jan 30 13:27:20.359841 containerd[1738]: time="2025-01-30T13:27:20.359789963Z" level=info msg="RemoveContainer for \"f9f519dcdfa8257f8e7c9e2e50eacb0744a565b0696a29141f0717ac328d0236\" returns successfully" Jan 30 13:27:20.360052 kubelet[3363]: I0130 13:27:20.360025 3363 scope.go:117] "RemoveContainer" containerID="8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb" Jan 30 13:27:20.361993 containerd[1738]: time="2025-01-30T13:27:20.361949926Z" level=info msg="RemoveContainer for \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\"" Jan 30 13:27:20.371402 containerd[1738]: time="2025-01-30T13:27:20.371358177Z" level=info msg="RemoveContainer for \"8a7fd519b93bb1474497144418f67b50fdb01c67ff4d2c01da2cb014bbef0fcb\" returns successfully" Jan 30 13:27:20.371647 kubelet[3363]: I0130 13:27:20.371606 3363 scope.go:117] "RemoveContainer" containerID="ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0" Jan 30 13:27:20.372852 containerd[1738]: time="2025-01-30T13:27:20.372785179Z" level=info msg="RemoveContainer for \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\"" Jan 30 13:27:20.380602 containerd[1738]: time="2025-01-30T13:27:20.380539588Z" level=info msg="RemoveContainer for \"ad12a73c3fc372aa3ae23e07c5fedb351220e3134168b7442a3081afba069ef0\" returns successfully" Jan 30 13:27:20.380893 kubelet[3363]: I0130 13:27:20.380811 3363 scope.go:117] 
"RemoveContainer" containerID="e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff" Jan 30 13:27:20.383765 containerd[1738]: time="2025-01-30T13:27:20.383720472Z" level=info msg="RemoveContainer for \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\"" Jan 30 13:27:20.392505 containerd[1738]: time="2025-01-30T13:27:20.392462762Z" level=info msg="RemoveContainer for \"e1be7672f875cb02383f35ad8cb35f8dca3f22ee8e63be6477c2b2fb6bc928ff\" returns successfully" Jan 30 13:27:20.392870 kubelet[3363]: I0130 13:27:20.392765 3363 scope.go:117] "RemoveContainer" containerID="24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d" Jan 30 13:27:20.394038 containerd[1738]: time="2025-01-30T13:27:20.394008804Z" level=info msg="RemoveContainer for \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\"" Jan 30 13:27:20.401259 containerd[1738]: time="2025-01-30T13:27:20.401223773Z" level=info msg="RemoveContainer for \"24f5f91e667d7ca2e7b37ecfc5205de0c458d69b2285afccbc2744b003178f4d\" returns successfully" Jan 30 13:27:20.929082 kubelet[3363]: I0130 13:27:20.928458 3363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" path="/var/lib/kubelet/pods/3e54aae8-0a7a-4dfb-b646-da9358b0a0ce/volumes" Jan 30 13:27:20.929082 kubelet[3363]: I0130 13:27:20.928971 3363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0a0c39-87c3-42ba-aec0-f4a71b21a71b" path="/var/lib/kubelet/pods/df0a0c39-87c3-42ba-aec0-f4a71b21a71b/volumes" Jan 30 13:27:21.140969 sshd[4958]: Connection closed by 10.200.16.10 port 42898 Jan 30 13:27:21.141585 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:21.144469 systemd[1]: sshd@22-10.200.20.42:22-10.200.16.10:42898.service: Deactivated successfully. Jan 30 13:27:21.147053 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:27:21.147445 systemd[1]: session-25.scope: Consumed 1.011s CPU time. 
Jan 30 13:27:21.148088 systemd-logind[1699]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:27:21.150193 systemd-logind[1699]: Removed session 25. Jan 30 13:27:21.224394 systemd[1]: Started sshd@23-10.200.20.42:22-10.200.16.10:42908.service - OpenSSH per-connection server daemon (10.200.16.10:42908). Jan 30 13:27:21.382516 kubelet[3363]: I0130 13:27:21.382303 3363 setters.go:600] "Node became not ready" node="ci-4186.1.0-a-4db8cd7df2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:27:21Z","lastTransitionTime":"2025-01-30T13:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:27:21.650911 sshd[5116]: Accepted publickey for core from 10.200.16.10 port 42908 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:21.652239 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:21.656744 systemd-logind[1699]: New session 26 of user core. Jan 30 13:27:21.659304 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 30 13:27:23.576029 kubelet[3363]: E0130 13:27:23.575951 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="mount-bpf-fs" Jan 30 13:27:23.576029 kubelet[3363]: E0130 13:27:23.575983 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="cilium-agent" Jan 30 13:27:23.576029 kubelet[3363]: E0130 13:27:23.575991 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="mount-cgroup" Jan 30 13:27:23.576029 kubelet[3363]: E0130 13:27:23.575996 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="apply-sysctl-overwrites" Jan 30 13:27:23.576029 kubelet[3363]: E0130 13:27:23.576002 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df0a0c39-87c3-42ba-aec0-f4a71b21a71b" containerName="cilium-operator" Jan 30 13:27:23.577020 kubelet[3363]: E0130 13:27:23.576008 3363 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="clean-cilium-state" Jan 30 13:27:23.577020 kubelet[3363]: I0130 13:27:23.576567 3363 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e54aae8-0a7a-4dfb-b646-da9358b0a0ce" containerName="cilium-agent" Jan 30 13:27:23.577020 kubelet[3363]: I0130 13:27:23.576581 3363 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0a0c39-87c3-42ba-aec0-f4a71b21a71b" containerName="cilium-operator" Jan 30 13:27:23.587461 systemd[1]: Created slice kubepods-burstable-pod7ca3b9d3_af52_4318_a5ac_0ef64d423823.slice - libcontainer container kubepods-burstable-pod7ca3b9d3_af52_4318_a5ac_0ef64d423823.slice. 
Jan 30 13:27:23.623128 sshd[5118]: Connection closed by 10.200.16.10 port 42908 Jan 30 13:27:23.623673 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:23.628989 systemd[1]: sshd@23-10.200.20.42:22-10.200.16.10:42908.service: Deactivated successfully. Jan 30 13:27:23.630674 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:27:23.633005 systemd[1]: session-26.scope: Consumed 1.584s CPU time. Jan 30 13:27:23.638093 systemd-logind[1699]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:27:23.639880 systemd-logind[1699]: Removed session 26. Jan 30 13:27:23.698413 systemd[1]: Started sshd@24-10.200.20.42:22-10.200.16.10:42920.service - OpenSSH per-connection server daemon (10.200.16.10:42920). Jan 30 13:27:23.732849 kubelet[3363]: I0130 13:27:23.732752 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-cni-path\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.732849 kubelet[3363]: I0130 13:27:23.732795 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ca3b9d3-af52-4318-a5ac-0ef64d423823-clustermesh-secrets\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.732849 kubelet[3363]: I0130 13:27:23.732819 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvwm\" (UniqueName: \"kubernetes.io/projected/7ca3b9d3-af52-4318-a5ac-0ef64d423823-kube-api-access-kzvwm\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.732849 kubelet[3363]: I0130 13:27:23.732839 3363 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-hostproc\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.732849 kubelet[3363]: I0130 13:27:23.732856 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ca3b9d3-af52-4318-a5ac-0ef64d423823-cilium-ipsec-secrets\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732871 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-cilium-cgroup\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732889 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-cilium-run\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732907 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-host-proc-sys-kernel\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732921 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-lib-modules\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732948 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ca3b9d3-af52-4318-a5ac-0ef64d423823-cilium-config-path\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733083 kubelet[3363]: I0130 13:27:23.732966 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-bpf-maps\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733247 kubelet[3363]: I0130 13:27:23.732980 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-xtables-lock\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733247 kubelet[3363]: I0130 13:27:23.732995 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ca3b9d3-af52-4318-a5ac-0ef64d423823-hubble-tls\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733247 kubelet[3363]: I0130 13:27:23.733011 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-etc-cni-netd\") pod \"cilium-54scq\" (UID: 
\"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.733247 kubelet[3363]: I0130 13:27:23.733028 3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ca3b9d3-af52-4318-a5ac-0ef64d423823-host-proc-sys-net\") pod \"cilium-54scq\" (UID: \"7ca3b9d3-af52-4318-a5ac-0ef64d423823\") " pod="kube-system/cilium-54scq" Jan 30 13:27:23.894243 containerd[1738]: time="2025-01-30T13:27:23.893845911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54scq,Uid:7ca3b9d3-af52-4318-a5ac-0ef64d423823,Namespace:kube-system,Attempt:0,}" Jan 30 13:27:23.987437 containerd[1738]: time="2025-01-30T13:27:23.987269103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:27:23.987437 containerd[1738]: time="2025-01-30T13:27:23.987334863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:27:23.987437 containerd[1738]: time="2025-01-30T13:27:23.987351143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:27:23.987693 containerd[1738]: time="2025-01-30T13:27:23.987629543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:27:24.008294 systemd[1]: Started cri-containerd-c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d.scope - libcontainer container c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d. 
Jan 30 13:27:24.028044 containerd[1738]: time="2025-01-30T13:27:24.028008791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54scq,Uid:7ca3b9d3-af52-4318-a5ac-0ef64d423823,Namespace:kube-system,Attempt:0,} returns sandbox id \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\"" Jan 30 13:27:24.030962 containerd[1738]: time="2025-01-30T13:27:24.030836515Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:27:24.065787 containerd[1738]: time="2025-01-30T13:27:24.065730517Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205\"" Jan 30 13:27:24.066510 containerd[1738]: time="2025-01-30T13:27:24.066442597Z" level=info msg="StartContainer for \"f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205\"" Jan 30 13:27:24.095297 systemd[1]: Started cri-containerd-f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205.scope - libcontainer container f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205. Jan 30 13:27:24.121270 containerd[1738]: time="2025-01-30T13:27:24.121185103Z" level=info msg="StartContainer for \"f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205\" returns successfully" Jan 30 13:27:24.128777 systemd[1]: cri-containerd-f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205.scope: Deactivated successfully. 
Jan 30 13:27:24.141176 sshd[5127]: Accepted publickey for core from 10.200.16.10 port 42920 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:24.141657 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:24.147558 systemd-logind[1699]: New session 27 of user core. Jan 30 13:27:24.153282 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:27:24.223874 containerd[1738]: time="2025-01-30T13:27:24.223811826Z" level=info msg="shim disconnected" id=f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205 namespace=k8s.io Jan 30 13:27:24.223874 containerd[1738]: time="2025-01-30T13:27:24.223865626Z" level=warning msg="cleaning up after shim disconnected" id=f27d3eedee7f5bb86ffc8709c80bd612d8cf79694f90a9972e07242be4135205 namespace=k8s.io Jan 30 13:27:24.223874 containerd[1738]: time="2025-01-30T13:27:24.223873666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:24.340552 containerd[1738]: time="2025-01-30T13:27:24.340335325Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:27:24.371672 containerd[1738]: time="2025-01-30T13:27:24.371628082Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac\"" Jan 30 13:27:24.374215 containerd[1738]: time="2025-01-30T13:27:24.374161885Z" level=info msg="StartContainer for \"b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac\"" Jan 30 13:27:24.404304 systemd[1]: Started cri-containerd-b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac.scope - libcontainer container 
b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac. Jan 30 13:27:24.432128 containerd[1738]: time="2025-01-30T13:27:24.432073395Z" level=info msg="StartContainer for \"b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac\" returns successfully" Jan 30 13:27:24.432578 systemd[1]: cri-containerd-b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac.scope: Deactivated successfully. Jan 30 13:27:24.463967 containerd[1738]: time="2025-01-30T13:27:24.463877273Z" level=info msg="shim disconnected" id=b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac namespace=k8s.io Jan 30 13:27:24.463967 containerd[1738]: time="2025-01-30T13:27:24.463957513Z" level=warning msg="cleaning up after shim disconnected" id=b16e7fb969af26553b442f90cb760592db994d8847180a92905f889b4b0772ac namespace=k8s.io Jan 30 13:27:24.463967 containerd[1738]: time="2025-01-30T13:27:24.463966873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:24.466879 sshd[5225]: Connection closed by 10.200.16.10 port 42920 Jan 30 13:27:24.467300 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Jan 30 13:27:24.471962 systemd[1]: sshd@24-10.200.20.42:22-10.200.16.10:42920.service: Deactivated successfully. Jan 30 13:27:24.474865 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:27:24.475893 systemd-logind[1699]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:27:24.477575 systemd-logind[1699]: Removed session 27. Jan 30 13:27:24.547978 systemd[1]: Started sshd@25-10.200.20.42:22-10.200.16.10:42934.service - OpenSSH per-connection server daemon (10.200.16.10:42934). 
Jan 30 13:27:24.940486 containerd[1738]: time="2025-01-30T13:27:24.940440563Z" level=info msg="StopPodSandbox for \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\"" Jan 30 13:27:24.940836 containerd[1738]: time="2025-01-30T13:27:24.940535283Z" level=info msg="TearDown network for sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" successfully" Jan 30 13:27:24.940836 containerd[1738]: time="2025-01-30T13:27:24.940544843Z" level=info msg="StopPodSandbox for \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" returns successfully" Jan 30 13:27:24.941000 containerd[1738]: time="2025-01-30T13:27:24.940968323Z" level=info msg="RemovePodSandbox for \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\"" Jan 30 13:27:24.941000 containerd[1738]: time="2025-01-30T13:27:24.940999003Z" level=info msg="Forcibly stopping sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\"" Jan 30 13:27:24.941056 containerd[1738]: time="2025-01-30T13:27:24.941042283Z" level=info msg="TearDown network for sandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" successfully" Jan 30 13:27:24.950205 containerd[1738]: time="2025-01-30T13:27:24.950162814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:27:24.950326 containerd[1738]: time="2025-01-30T13:27:24.950224254Z" level=info msg="RemovePodSandbox \"23a87cbe55f064d60547cd1fce6a1956895af81ef063ed7ec9e2997b663f5e63\" returns successfully" Jan 30 13:27:24.950881 containerd[1738]: time="2025-01-30T13:27:24.950722975Z" level=info msg="StopPodSandbox for \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\"" Jan 30 13:27:24.950881 containerd[1738]: time="2025-01-30T13:27:24.950808495Z" level=info msg="TearDown network for sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" successfully" Jan 30 13:27:24.950881 containerd[1738]: time="2025-01-30T13:27:24.950819215Z" level=info msg="StopPodSandbox for \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" returns successfully" Jan 30 13:27:24.951165 containerd[1738]: time="2025-01-30T13:27:24.951138856Z" level=info msg="RemovePodSandbox for \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\"" Jan 30 13:27:24.951205 containerd[1738]: time="2025-01-30T13:27:24.951167336Z" level=info msg="Forcibly stopping sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\"" Jan 30 13:27:24.951232 containerd[1738]: time="2025-01-30T13:27:24.951214016Z" level=info msg="TearDown network for sandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" successfully" Jan 30 13:27:24.958806 containerd[1738]: time="2025-01-30T13:27:24.958766185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:27:24.958883 containerd[1738]: time="2025-01-30T13:27:24.958820385Z" level=info msg="RemovePodSandbox \"1b250d78170aa260bc1edefbb0877f39c47f664ef1e87bb4a7d86fbe7d107e36\" returns successfully" Jan 30 13:27:24.981137 sshd[5306]: Accepted publickey for core from 10.200.16.10 port 42934 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:27:24.982064 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:27:24.986627 systemd-logind[1699]: New session 28 of user core. Jan 30 13:27:24.994289 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 13:27:25.033608 kubelet[3363]: E0130 13:27:25.033568 3363 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:27:25.345516 containerd[1738]: time="2025-01-30T13:27:25.345480327Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:27:25.370761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700386407.mount: Deactivated successfully. 
Jan 30 13:27:25.383043 containerd[1738]: time="2025-01-30T13:27:25.383002572Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1\"" Jan 30 13:27:25.383927 containerd[1738]: time="2025-01-30T13:27:25.383898853Z" level=info msg="StartContainer for \"acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1\"" Jan 30 13:27:25.409332 systemd[1]: Started cri-containerd-acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1.scope - libcontainer container acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1. Jan 30 13:27:25.434384 systemd[1]: cri-containerd-acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1.scope: Deactivated successfully. Jan 30 13:27:25.445303 containerd[1738]: time="2025-01-30T13:27:25.445184126Z" level=info msg="StartContainer for \"acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1\" returns successfully" Jan 30 13:27:25.476644 containerd[1738]: time="2025-01-30T13:27:25.476580164Z" level=info msg="shim disconnected" id=acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1 namespace=k8s.io Jan 30 13:27:25.476644 containerd[1738]: time="2025-01-30T13:27:25.476639364Z" level=warning msg="cleaning up after shim disconnected" id=acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1 namespace=k8s.io Jan 30 13:27:25.476644 containerd[1738]: time="2025-01-30T13:27:25.476648404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:25.838485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc7e3f51482fef6211e07e54382561c324b4f88682c33001bc08dbe0d2e41f1-rootfs.mount: Deactivated successfully. 
Jan 30 13:27:25.926409 kubelet[3363]: E0130 13:27:25.926335 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-c4lpx" podUID="39daba86-2b5f-4825-88bb-740a88fed604" Jan 30 13:27:26.350164 containerd[1738]: time="2025-01-30T13:27:26.349607768Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:27:26.377979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590205151.mount: Deactivated successfully. Jan 30 13:27:26.396502 containerd[1738]: time="2025-01-30T13:27:26.396456024Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd\"" Jan 30 13:27:26.398063 containerd[1738]: time="2025-01-30T13:27:26.397304185Z" level=info msg="StartContainer for \"5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd\"" Jan 30 13:27:26.421273 systemd[1]: Started cri-containerd-5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd.scope - libcontainer container 5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd. Jan 30 13:27:26.441963 systemd[1]: cri-containerd-5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd.scope: Deactivated successfully. 
Jan 30 13:27:26.448785 containerd[1738]: time="2025-01-30T13:27:26.448661886Z" level=info msg="StartContainer for \"5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd\" returns successfully" Jan 30 13:27:26.479316 containerd[1738]: time="2025-01-30T13:27:26.479238643Z" level=info msg="shim disconnected" id=5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd namespace=k8s.io Jan 30 13:27:26.479316 containerd[1738]: time="2025-01-30T13:27:26.479300603Z" level=warning msg="cleaning up after shim disconnected" id=5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd namespace=k8s.io Jan 30 13:27:26.479316 containerd[1738]: time="2025-01-30T13:27:26.479308843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:26.838542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a7c696a6754785b8930d0416536018348616745a783ec249357d44d78ee74bd-rootfs.mount: Deactivated successfully. Jan 30 13:27:27.356981 containerd[1738]: time="2025-01-30T13:27:27.356315292Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:27:27.405521 containerd[1738]: time="2025-01-30T13:27:27.405472351Z" level=info msg="CreateContainer within sandbox \"c09049c89f80d266d2a80344732888d11397af58cd9ba04e3564c4c1190ed77d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"908b945bf4da450612eac157aeae8a0e42177edfb17a8acdd080798601c079cb\"" Jan 30 13:27:27.406256 containerd[1738]: time="2025-01-30T13:27:27.406229712Z" level=info msg="StartContainer for \"908b945bf4da450612eac157aeae8a0e42177edfb17a8acdd080798601c079cb\"" Jan 30 13:27:27.433271 systemd[1]: Started cri-containerd-908b945bf4da450612eac157aeae8a0e42177edfb17a8acdd080798601c079cb.scope - libcontainer container 908b945bf4da450612eac157aeae8a0e42177edfb17a8acdd080798601c079cb. 
Jan 30 13:27:27.473580 containerd[1738]: time="2025-01-30T13:27:27.473529472Z" level=info msg="StartContainer for \"908b945bf4da450612eac157aeae8a0e42177edfb17a8acdd080798601c079cb\" returns successfully" Jan 30 13:27:27.816339 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:27:27.926457 kubelet[3363]: E0130 13:27:27.926374 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-c4lpx" podUID="39daba86-2b5f-4825-88bb-740a88fed604" Jan 30 13:27:29.926859 kubelet[3363]: E0130 13:27:29.926091 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-c4lpx" podUID="39daba86-2b5f-4825-88bb-740a88fed604" Jan 30 13:27:30.557256 systemd-networkd[1535]: lxc_health: Link UP Jan 30 13:27:30.566256 systemd-networkd[1535]: lxc_health: Gained carrier Jan 30 13:27:31.925342 kubelet[3363]: I0130 13:27:31.925279 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-54scq" podStartSLOduration=8.925261038 podStartE2EDuration="8.925261038s" podCreationTimestamp="2025-01-30 13:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:27:28.374003069 +0000 UTC m=+183.558932887" watchObservedRunningTime="2025-01-30 13:27:31.925261038 +0000 UTC m=+187.110190856" Jan 30 13:27:31.976262 systemd-networkd[1535]: lxc_health: Gained IPv6LL Jan 30 13:27:35.998148 sshd[5310]: Connection closed by 10.200.16.10 port 42934 Jan 30 13:27:35.998710 sshd-session[5306]: pam_unix(sshd:session): session closed 
for user core Jan 30 13:27:36.001195 systemd[1]: sshd@25-10.200.20.42:22-10.200.16.10:42934.service: Deactivated successfully. Jan 30 13:27:36.003708 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 13:27:36.005812 systemd-logind[1699]: Session 28 logged out. Waiting for processes to exit. Jan 30 13:27:36.006976 systemd-logind[1699]: Removed session 28.