Dec 13 01:25:50.297543 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:25:50.297567 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:25:50.297575 kernel: KASLR enabled Dec 13 01:25:50.297581 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:25:50.297588 kernel: printk: bootconsole [pl11] enabled Dec 13 01:25:50.297594 kernel: efi: EFI v2.7 by EDK II Dec 13 01:25:50.297601 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:25:50.297607 kernel: random: crng init done Dec 13 01:25:50.297613 kernel: ACPI: Early table checksum verification disabled Dec 13 01:25:50.297619 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:25:50.297625 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297631 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297638 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:25:50.297644 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297652 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297658 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297665 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297672 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297679 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297685 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:25:50.297691 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297697 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:25:50.297704 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:25:50.297710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:25:50.297716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:25:50.297723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:25:50.297729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:25:50.297735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:25:50.297743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:25:50.297749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:25:50.297755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:25:50.297762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:25:50.297768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:25:50.297774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:25:50.297796 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 01:25:50.297805 kernel: Zone ranges: Dec 13 01:25:50.297811 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Dec 13 01:25:50.297817 kernel: DMA32 empty Dec 13 01:25:50.297823 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:50.297830 kernel: Movable zone start for each node Dec 13 01:25:50.297841 kernel: Early memory node ranges Dec 13 01:25:50.297848 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:25:50.297854 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:25:50.297861 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:25:50.297868 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:25:50.297876 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:25:50.297883 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:25:50.297889 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:50.297896 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:25:50.297903 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:25:50.297909 kernel: psci: probing for conduit method from ACPI. Dec 13 01:25:50.297916 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:25:50.297923 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:25:50.297929 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 01:25:50.297936 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:25:50.297942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:25:50.297949 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:25:50.297957 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:25:50.297963 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:25:50.297970 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:25:50.297977 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:25:50.297983 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:25:50.297990 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:25:50.297996 kernel: CPU features: detected: Spectre-BHB Dec 13 01:25:50.298003 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:25:50.298010 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:25:50.298016 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:25:50.298023 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:25:50.298031 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:25:50.298037 kernel: alternatives: applying boot alternatives Dec 13 01:25:50.298046 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:50.298053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:25:50.298060 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:25:50.298066 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:25:50.298073 kernel: Fallback order for Node 0: 0 Dec 13 01:25:50.298079 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Dec 13 01:25:50.298086 kernel: Policy zone: Normal Dec 13 01:25:50.298092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:50.298099 kernel: software IO TLB: area num 2. Dec 13 01:25:50.298107 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:25:50.298114 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:25:50.298121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:50.298127 kernel: trace event string verifier disabled Dec 13 01:25:50.298134 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:50.298141 kernel: rcu: RCU event tracing is enabled. Dec 13 01:25:50.298148 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:50.298155 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:50.298162 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:50.298168 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:25:50.298175 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:50.298183 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:25:50.298190 kernel: GICv3: 960 SPIs implemented Dec 13 01:25:50.298196 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:25:50.298203 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:25:50.298210 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:25:50.298216 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:25:50.298223 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:25:50.298230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:50.298237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:50.298244 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:25:50.298251 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:25:50.298258 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:25:50.298266 kernel: Console: colour dummy device 80x25 Dec 13 01:25:50.298274 kernel: printk: console [tty1] enabled Dec 13 01:25:50.298281 kernel: ACPI: Core revision 20230628 Dec 13 01:25:50.298288 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:25:50.298295 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:50.298302 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:50.298309 kernel: landlock: Up and running. Dec 13 01:25:50.298315 kernel: SELinux: Initializing. Dec 13 01:25:50.298322 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.298331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.298338 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:50.298345 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 01:25:50.298352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:25:50.298358 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:25:50.298365 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:25:50.298372 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:50.298386 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:50.298393 kernel: Remapping and enabling EFI services. Dec 13 01:25:50.298400 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:50.298407 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:25:50.298415 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:25:50.298423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:50.298430 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:25:50.298437 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:50.298444 kernel: SMP: Total of 2 processors activated. Dec 13 01:25:50.298451 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:25:50.298460 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:25:50.298468 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:25:50.298475 kernel: CPU features: detected: CRC32 instructions Dec 13 01:25:50.298482 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:25:50.298489 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:25:50.298496 kernel: CPU features: detected: Privileged Access Never Dec 13 01:25:50.298503 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:25:50.298510 kernel: alternatives: applying system-wide alternatives Dec 13 01:25:50.298518 kernel: devtmpfs: initialized Dec 13 01:25:50.298526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:50.298533 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:50.298541 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:50.298548 kernel: SMBIOS 3.1.0 present. Dec 13 01:25:50.298555 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:25:50.298562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:50.298569 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:25:50.298577 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:25:50.298585 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:25:50.298593 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:50.298600 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:50.298607 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:50.298614 kernel: cpuidle: using governor menu Dec 13 01:25:50.298622 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:25:50.298629 kernel: ASID allocator initialised with 32768 entries Dec 13 01:25:50.298636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:50.298644 kernel: Serial: AMBA PL011 UART driver Dec 13 01:25:50.298652 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:25:50.298659 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:25:50.298666 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:25:50.298674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:50.298681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:50.298688 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:25:50.298695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:25:50.298702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:50.298710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:50.298718 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:25:50.298725 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:25:50.298732 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:50.298740 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:50.298747 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:50.298754 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:50.298761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:25:50.298768 kernel: ACPI: Interpreter enabled Dec 13 01:25:50.298775 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:25:50.303540 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:25:50.303558 kernel: printk: console [ttyAMA0] enabled Dec 13 01:25:50.303566 kernel: printk: bootconsole [pl11] disabled Dec 13 01:25:50.303574 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:25:50.303581 kernel: iommu: Default domain type: Translated Dec 13 01:25:50.303588 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:25:50.303596 kernel: efivars: Registered efivars operations Dec 13 01:25:50.303603 kernel: vgaarb: loaded Dec 13 01:25:50.303611 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:25:50.303618 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:50.303627 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:50.303634 kernel: pnp: PnP ACPI init Dec 13 01:25:50.303642 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:25:50.303649 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:50.303656 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:50.303664 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:25:50.303671 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:25:50.303679 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:25:50.303688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:25:50.303695 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:25:50.303702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.303710 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.303717 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:50.303725 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:50.303732 kernel: kvm [1]: HYP mode not available Dec 13 01:25:50.303739 kernel: Initialise system trusted keyrings Dec 13 01:25:50.303747 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:25:50.303755 kernel: Key type asymmetric registered Dec 13 01:25:50.303762 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:50.303770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:25:50.303777 kernel: io scheduler mq-deadline registered Dec 13 01:25:50.303794 kernel: io scheduler kyber registered Dec 13 01:25:50.303802 kernel: io scheduler bfq registered Dec 13 01:25:50.303809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:50.303817 kernel: thunder_xcv, ver 1.0 Dec 13 01:25:50.303824 kernel: thunder_bgx, ver 1.0 Dec 13 01:25:50.303831 kernel: nicpf, ver 1.0 Dec 13 01:25:50.303840 kernel: nicvf, ver 1.0 Dec 13 01:25:50.303987 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:25:50.304062 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:49 UTC (1734053149) Dec 13 01:25:50.304073 kernel: efifb: probing for efifb Dec 13 01:25:50.304080 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:25:50.304088 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:25:50.304095 kernel: efifb: scrolling: redraw Dec 13 01:25:50.304105 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:25:50.304112 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:25:50.304119 kernel: fb0: EFI VGA frame buffer device Dec 13 01:25:50.304127 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:25:50.304134 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:25:50.304141 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:25:50.304148 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:25:50.304156 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:25:50.304163 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:25:50.304172 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:25:50.304179 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:50.304187 kernel: Segment Routing with IPv6 Dec 13 01:25:50.304206 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:50.304215 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:50.304223 kernel: Key type dns_resolver registered Dec 13 01:25:50.304230 kernel: registered taskstats version 1 Dec 13 01:25:50.304237 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:50.304245 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:25:50.304252 kernel: Key type .fscrypt registered Dec 13 01:25:50.304261 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:50.304269 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:25:50.304276 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:50.304284 kernel: ima: No architecture policies found Dec 13 01:25:50.304291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:25:50.304298 kernel: clk: Disabling unused clocks Dec 13 01:25:50.304305 kernel: Freeing unused kernel memory: 39360K Dec 13 01:25:50.304312 kernel: Run /init as init process Dec 13 01:25:50.304321 kernel: with arguments: Dec 13 01:25:50.304329 kernel: /init Dec 13 01:25:50.304336 kernel: with environment: Dec 13 01:25:50.304343 kernel: HOME=/ Dec 13 01:25:50.304350 kernel: TERM=linux Dec 13 01:25:50.304357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:50.304367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:50.304376 systemd[1]: Detected virtualization microsoft. Dec 13 01:25:50.304386 systemd[1]: Detected architecture arm64. Dec 13 01:25:50.304393 systemd[1]: Running in initrd. Dec 13 01:25:50.304401 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:50.304408 systemd[1]: Hostname set to . Dec 13 01:25:50.304417 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:50.304424 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:50.304432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:50.304440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:50.304451 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:25:50.304459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:50.304467 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:50.304481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:50.304491 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:50.304500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:50.304508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:50.304518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:50.304526 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:50.304534 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:50.304542 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:50.304549 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:50.304557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:50.304565 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:50.304573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:50.304582 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:25:50.304590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:50.304598 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:50.304606 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:50.304614 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:50.304623 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:50.304631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:50.304638 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:50.304646 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:50.304656 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:50.304670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:50.304697 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:25:50.304718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:50.304729 systemd-journald[217]: Journal started Dec 13 01:25:50.304747 systemd-journald[217]: Runtime Journal (/run/log/journal/d4a46c9442554cf9b6ece953f35ec174) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:25:50.305407 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:25:50.324800 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:50.323605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:50.340810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:50.376301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:50.376325 kernel: Bridge firewalling registered Dec 13 01:25:50.364239 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:50.375668 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:25:50.381272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:50.392380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:50.417694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:50.432962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:50.446167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:50.460959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:50.479360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:50.488727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:50.501004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:50.516204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:50.543030 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:50.558397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:50.564932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:25:50.587533 dracut-cmdline[250]: dracut-dracut-053 Dec 13 01:25:50.592474 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:50.623119 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:50.648416 systemd-resolved[259]: Positive Trust Anchors: Dec 13 01:25:50.648436 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:50.648468 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:50.650841 systemd-resolved[259]: Defaulting to hostname 'linux'. Dec 13 01:25:50.652930 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:50.667730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:50.770813 kernel: SCSI subsystem initialized Dec 13 01:25:50.778800 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:25:50.788806 kernel: iscsi: registered transport (tcp) Dec 13 01:25:50.806201 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:50.806221 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:50.845057 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:50.861999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:50.893119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:50.893192 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:50.898870 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:50.946807 kernel: raid6: neonx8 gen() 15754 MB/s Dec 13 01:25:50.966800 kernel: raid6: neonx4 gen() 15669 MB/s Dec 13 01:25:50.986791 kernel: raid6: neonx2 gen() 13261 MB/s Dec 13 01:25:51.007798 kernel: raid6: neonx1 gen() 10507 MB/s Dec 13 01:25:51.027790 kernel: raid6: int64x8 gen() 6979 MB/s Dec 13 01:25:51.047792 kernel: raid6: int64x4 gen() 7354 MB/s Dec 13 01:25:51.068793 kernel: raid6: int64x2 gen() 6133 MB/s Dec 13 01:25:51.092486 kernel: raid6: int64x1 gen() 5062 MB/s Dec 13 01:25:51.092506 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Dec 13 01:25:51.115902 kernel: raid6: .... 
xor() 11949 MB/s, rmw enabled Dec 13 01:25:51.115920 kernel: raid6: using neon recovery algorithm Dec 13 01:25:51.128906 kernel: xor: measuring software checksum speed Dec 13 01:25:51.128925 kernel: 8regs : 19764 MB/sec Dec 13 01:25:51.132666 kernel: 32regs : 19617 MB/sec Dec 13 01:25:51.136400 kernel: arm64_neon : 27061 MB/sec Dec 13 01:25:51.141031 kernel: xor: using function: arm64_neon (27061 MB/sec) Dec 13 01:25:51.191801 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:51.201349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:51.217974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:51.240119 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 13 01:25:51.246684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:51.264072 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:51.281834 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Dec 13 01:25:51.309039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:51.325059 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:51.365060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:51.389019 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:51.417062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:51.428038 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:51.450601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:51.473192 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:51.495939 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:25:51.509243 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:25:51.509330 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:25:51.509345 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:25:51.509219 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:51.574253 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:25:51.574277 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:25:51.574991 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:25:51.575005 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:25:51.575016 kernel: PTP clock support registered Dec 13 01:25:51.575032 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:25:51.575295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:51.595871 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:25:51.595893 kernel: scsi host0: storvsc_host_t Dec 13 01:25:51.597105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 01:25:51.626300 kernel: scsi host1: storvsc_host_t Dec 13 01:25:51.626473 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:25:51.626495 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:25:51.597313 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:51.626428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:51.681526 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:25:51.681558 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:25:51.681568 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:25:51.681577 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:25:51.681586 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:25:51.632939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:51.633223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:51.664743 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.768436 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:25:51.801174 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: VF slot 1 added Dec 13 01:25:51.801315 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:25:51.801327 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:25:51.801338 kernel: hv_pci f1da8e40-1385-4cf9-9f2c-962569515642: PCI VMBus probing: Using version 0x10004 Dec 13 01:25:51.920995 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:25:51.921526 kernel: hv_pci f1da8e40-1385-4cf9-9f2c-962569515642: PCI host bridge to bus 1385:00 Dec 13 01:25:51.921642 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:25:51.921753 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:25:51.921838 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:25:51.921919 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:25:51.922000 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:25:51.922097 kernel: pci_bus 1385:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:25:51.922193 kernel: pci_bus 1385:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:25:51.922279 kernel: pci 1385:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:25:51.922383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:51.922393 kernel: pci 1385:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:51.922488 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:25:51.922572 kernel: pci 1385:00:02.0: enabling Extended Tags Dec 13 01:25:51.922675 kernel: pci 1385:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1385:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:25:51.922763 kernel: pci_bus 1385:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:25:51.922838 kernel: pci 1385:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:51.732957 systemd-resolved[259]: Clock change detected. Flushing caches. Dec 13 01:25:51.768338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.790929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:51.791060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:25:51.833842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.894737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:51.982846 kernel: mlx5_core 1385:00:02.0: enabling device (0000 -> 0002) Dec 13 01:25:52.198361 kernel: mlx5_core 1385:00:02.0: firmware version: 16.30.1284 Dec 13 01:25:52.198508 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: VF registering: eth1 Dec 13 01:25:52.198600 kernel: mlx5_core 1385:00:02.0 eth1: joined to eth0 Dec 13 01:25:52.198692 kernel: mlx5_core 1385:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:25:51.923300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:51.985733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:52.219426 kernel: mlx5_core 1385:00:02.0 enP4997s1: renamed from eth1 Dec 13 01:25:52.268011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:25:52.351075 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (490) Dec 13 01:25:52.368333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:25:52.397977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:25:52.415073 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (496) Dec 13 01:25:52.421699 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:25:52.428250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:25:52.461370 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:52.487066 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:52.494067 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:53.503395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:53.503450 disk-uuid[601]: The operation has completed successfully. Dec 13 01:25:53.556632 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:53.556730 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:53.589192 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:53.602100 sh[687]: Success Dec 13 01:25:53.632085 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:25:53.808464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:53.814306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:53.829187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:53.862600 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:25:53.862663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:53.869285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:53.874257 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:53.878255 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:54.157770 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 01:25:54.163165 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:54.179318 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:54.187233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:25:54.223526 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:54.223581 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:54.228328 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:54.254069 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:54.262319 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:54.274274 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:54.281323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:54.293319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:54.317640 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:54.335195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:54.362076 systemd-networkd[871]: lo: Link UP Dec 13 01:25:54.362085 systemd-networkd[871]: lo: Gained carrier Dec 13 01:25:54.363691 systemd-networkd[871]: Enumeration completed Dec 13 01:25:54.363782 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:54.372678 systemd[1]: Reached target network.target - Network. Dec 13 01:25:54.376586 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:54.376589 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:54.454081 kernel: mlx5_core 1385:00:02.0 enP4997s1: Link up Dec 13 01:25:54.491070 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: Data path switched to VF: enP4997s1 Dec 13 01:25:54.491495 systemd-networkd[871]: enP4997s1: Link UP Dec 13 01:25:54.491624 systemd-networkd[871]: eth0: Link UP Dec 13 01:25:54.491728 systemd-networkd[871]: eth0: Gained carrier Dec 13 01:25:54.491736 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:54.499292 systemd-networkd[871]: enP4997s1: Gained carrier Dec 13 01:25:54.524100 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:55.193013 ignition[858]: Ignition 2.19.0 Dec 13 01:25:55.196399 ignition[858]: Stage: fetch-offline Dec 13 01:25:55.196454 ignition[858]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.198372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:55.196464 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.214188 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:55.196558 ignition[858]: parsed url from cmdline: "" Dec 13 01:25:55.196561 ignition[858]: no config URL provided Dec 13 01:25:55.196566 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:55.196573 ignition[858]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:55.196579 ignition[858]: failed to fetch config: resource requires networking Dec 13 01:25:55.196752 ignition[858]: Ignition finished successfully Dec 13 01:25:55.229740 ignition[881]: Ignition 2.19.0 Dec 13 01:25:55.229746 ignition[881]: Stage: fetch Dec 13 01:25:55.229947 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.229960 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.234212 ignition[881]: parsed url from cmdline: "" Dec 13 01:25:55.234217 ignition[881]: no config URL provided Dec 13 01:25:55.234235 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:55.234250 ignition[881]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:55.234275 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:25:55.346129 ignition[881]: GET result: OK Dec 13 01:25:55.346220 ignition[881]: config has been read from IMDS userdata Dec 13 01:25:55.346294 ignition[881]: parsing config with SHA512: f5ea8ca8b5bfef601731b4d0a7fa48c6e2d109e43c3f32f453587cdb195ca4a7e90ad6240608dea8d8e066325634f5620b0799829d89c19e349675a23f76dd9c Dec 13 01:25:55.350669 unknown[881]: fetched base config from "system" Dec 13 01:25:55.351141 ignition[881]: fetch: fetch complete Dec 13 01:25:55.350677 unknown[881]: fetched base config from "system" Dec 13 01:25:55.351146 ignition[881]: fetch: fetch passed Dec 13 01:25:55.350682 unknown[881]: fetched user config from "azure" Dec 13 01:25:55.351192 ignition[881]: Ignition finished successfully Dec 13 01:25:55.357081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:25:55.374294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:55.390604 ignition[888]: Ignition 2.19.0 Dec 13 01:25:55.400119 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:55.390619 ignition[888]: Stage: kargs Dec 13 01:25:55.390862 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.390872 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.392312 ignition[888]: kargs: kargs passed Dec 13 01:25:55.392378 ignition[888]: Ignition finished successfully Dec 13 01:25:55.427354 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:55.448453 ignition[894]: Ignition 2.19.0 Dec 13 01:25:55.449089 ignition[894]: Stage: disks Dec 13 01:25:55.449278 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.454632 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:55.449288 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.462385 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:55.450306 ignition[894]: disks: disks passed Dec 13 01:25:55.472298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:55.450352 ignition[894]: Ignition finished successfully Dec 13 01:25:55.483682 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:55.494378 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 13 01:25:55.505233 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:55.529304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:55.589221 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:25:55.597136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:25:55.617269 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:25:55.671068 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:55.672315 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:55.676745 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:55.716120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:55.725365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:55.743449 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:25:55.756728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913) Dec 13 01:25:55.751633 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:55.791075 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:55.791097 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:55.791108 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:55.751673 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:55.770837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:55.813066 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:55.813339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:25:55.820085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:56.080377 systemd-networkd[871]: enP4997s1: Gained IPv6LL Dec 13 01:25:56.144372 systemd-networkd[871]: eth0: Gained IPv6LL Dec 13 01:25:56.255749 coreos-metadata[915]: Dec 13 01:25:56.255 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:25:56.265898 coreos-metadata[915]: Dec 13 01:25:56.265 INFO Fetch successful Dec 13 01:25:56.265898 coreos-metadata[915]: Dec 13 01:25:56.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:25:56.294295 coreos-metadata[915]: Dec 13 01:25:56.294 INFO Fetch successful Dec 13 01:25:56.299971 coreos-metadata[915]: Dec 13 01:25:56.296 INFO wrote hostname ci-4081.2.1-a-16a3da9678 to /sysroot/etc/hostname Dec 13 01:25:56.300365 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:56.431423 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:56.440220 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:56.448971 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:56.470826 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:57.091884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 13 01:25:57.107275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:57.116234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:57.131561 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:25:57.141717 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.160210 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:57.177191 ignition[1031]: INFO : Ignition 2.19.0 Dec 13 01:25:57.177191 ignition[1031]: INFO : Stage: mount Dec 13 01:25:57.185251 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:57.185251 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:57.185251 ignition[1031]: INFO : mount: mount passed Dec 13 01:25:57.185251 ignition[1031]: INFO : Ignition finished successfully Dec 13 01:25:57.182765 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:57.208246 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:57.227288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:57.262211 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043) Dec 13 01:25:57.262274 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.267960 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:57.272101 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:57.278062 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:57.279992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:57.306056 ignition[1061]: INFO : Ignition 2.19.0 Dec 13 01:25:57.306056 ignition[1061]: INFO : Stage: files Dec 13 01:25:57.313477 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:57.313477 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:57.313477 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:57.337056 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:57.337056 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:57.389693 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:57.397106 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:57.397106 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:57.390141 unknown[1061]: wrote ssh authorized keys file for user: core Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:57.588969 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): GET result: OK Dec 13 01:25:57.851694 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:57.851694 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:57.871827 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:58.297685 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:25:58.367695 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:25:58.791407 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:25:59.021725 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:59.021725 ignition[1061]: INFO : files: op(d): 
[started] processing unit "containerd.service" Dec 13 01:25:59.045803 ignition[1061]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: files passed Dec 13 01:25:59.062029 ignition[1061]: INFO : Ignition finished successfully Dec 13 01:25:59.061719 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:59.110359 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:25:59.129237 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:59.150081 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:59.244597 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.244597 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.150174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:59.267717 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.177530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:59.192489 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:59.213275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:59.264280 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:59.266076 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:25:59.275139 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:59.289564 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:59.303138 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Dec 13 01:25:59.322306 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:59.367606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:59.384323 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:59.404636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:59.416229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:59.423279 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:59.433789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:59.433967 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:59.449372 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:59.461644 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:59.472193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:59.483028 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:59.501965 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:59.514387 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:59.525493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:59.532565 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:59.544915 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:59.560228 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:59.574536 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:59.574706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:59.588734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:59.594785 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:59.606135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:59.606247 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:59.619034 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:59.619228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:59.635334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:59.635504 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:59.649193 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:59.649355 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:59.659306 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:25:59.659469 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:59.696182 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:25:59.731092 ignition[1112]: INFO : Ignition 2.19.0 Dec 13 01:25:59.731092 ignition[1112]: INFO : Stage: umount Dec 13 01:25:59.731092 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:59.731092 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:59.731092 ignition[1112]: INFO : umount: umount passed Dec 13 01:25:59.731092 ignition[1112]: INFO : Ignition finished successfully Dec 13 01:25:59.713210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:59.724457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:59.724630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:59.735209 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:59.735328 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:59.751443 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:59.752067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:59.764860 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:59.764960 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:59.774624 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:59.774677 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:59.780203 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:59.780242 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:59.789915 systemd[1]: Stopped target network.target - Network. Dec 13 01:25:59.799230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:59.799282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:59.810721 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:59.815652 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:59.815721 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:59.830210 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:59.840356 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:25:59.852778 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:59.852831 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:59.863498 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:59.863546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:59.874415 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:59.874468 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:59.884897 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:59.884941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:59.896351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:59.906329 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:59.927116 systemd-networkd[871]: eth0: DHCPv6 lease lost Dec 13 01:25:59.927734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:59.928546 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 01:25:59.928637 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:59.941136 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:59.941255 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:00.176071 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: Data path switched from VF: enP4997s1 Dec 13 01:25:59.953814 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:59.953979 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:25:59.964634 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:59.964840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:59.978035 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:59.978116 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:59.986315 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:59.986383 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:00.012232 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:00.021342 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:00.021414 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:00.033341 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:00.033401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:00.043794 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:00.043840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:00.054619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:00.054667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:00.066695 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:00.103227 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:00.103370 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:00.116142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:00.116189 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:00.125401 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:00.125431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:00.135598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:00.135650 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:00.159935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:00.160006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:00.176107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:00.176192 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:00.205301 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:00.220789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:00.220867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:26:00.233895 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:00.233950 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:00.246383 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:00.246435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:00.463083 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:00.258397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:00.258446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:00.269770 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:00.269857 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:00.279904 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:00.279981 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:00.298325 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:00.325290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:00.399578 systemd[1]: Switching root. Dec 13 01:26:00.511843 systemd-journald[217]: Journal stopped Dec 13 01:25:50.297543 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:25:50.297567 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:25:50.297575 kernel: KASLR enabled Dec 13 01:25:50.297581 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:25:50.297588 kernel: printk: bootconsole [pl11] enabled Dec 13 01:25:50.297594 kernel: efi: EFI v2.7 by EDK II Dec 13 01:25:50.297601 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:25:50.297607 kernel: random: crng init done Dec 13 01:25:50.297613 kernel: ACPI: Early table checksum verification disabled Dec 13 01:25:50.297619 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:25:50.297625 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297631 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297638 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:25:50.297644 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297652 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297658 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297665 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297672 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297679 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:50.297685 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:25:50.297691 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 
00000001 MSFT 00000001) Dec 13 01:25:50.297697 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:25:50.297704 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:25:50.297710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:25:50.297716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:25:50.297723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:25:50.297729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:25:50.297735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:25:50.297743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:25:50.297749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:25:50.297755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:25:50.297762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:25:50.297768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:25:50.297774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:25:50.297796 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 01:25:50.297805 kernel: Zone ranges: Dec 13 01:25:50.297811 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 01:25:50.297817 kernel: DMA32 empty Dec 13 01:25:50.297823 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:50.297830 kernel: Movable zone start for each node Dec 13 01:25:50.297841 kernel: Early memory node ranges Dec 13 01:25:50.297848 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:25:50.297854 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:25:50.297861 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:25:50.297868 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:25:50.297876 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:25:50.297883 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:25:50.297889 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:50.297896 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:25:50.297903 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:25:50.297909 kernel: psci: probing for conduit method from ACPI. Dec 13 01:25:50.297916 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:25:50.297923 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:25:50.297929 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 13 01:25:50.297936 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:25:50.297942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:25:50.297949 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:25:50.297957 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:25:50.297963 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:25:50.297970 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:25:50.297977 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:25:50.297983 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:25:50.297990 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:25:50.297996 kernel: CPU features: detected: Spectre-BHB Dec 13 01:25:50.298003 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:25:50.298010 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:25:50.298016 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:25:50.298023 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:25:50.298031 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:25:50.298037 kernel: alternatives: applying boot alternatives Dec 13 01:25:50.298046 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:50.298053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:25:50.298060 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:25:50.298066 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:25:50.298073 kernel: Fallback order for Node 0: 0 Dec 13 01:25:50.298079 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 01:25:50.298086 kernel: Policy zone: Normal Dec 13 01:25:50.298092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:50.298099 kernel: software IO TLB: area num 2. Dec 13 01:25:50.298107 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:25:50.298114 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:25:50.298121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:50.298127 kernel: trace event string verifier disabled Dec 13 01:25:50.298134 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:50.298141 kernel: rcu: RCU event tracing is enabled. Dec 13 01:25:50.298148 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:50.298155 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:50.298162 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:50.298168 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:25:50.298175 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:50.298183 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:25:50.298190 kernel: GICv3: 960 SPIs implemented Dec 13 01:25:50.298196 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:25:50.298203 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:25:50.298210 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:25:50.298216 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:25:50.298223 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:25:50.298230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:50.298237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:50.298244 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:25:50.298251 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:25:50.298258 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:25:50.298266 kernel: Console: colour dummy device 80x25 Dec 13 01:25:50.298274 kernel: printk: console [tty1] enabled Dec 13 01:25:50.298281 kernel: ACPI: Core revision 20230628 Dec 13 01:25:50.298288 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:25:50.298295 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:50.298302 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:50.298309 kernel: landlock: Up and running. Dec 13 01:25:50.298315 kernel: SELinux: Initializing. Dec 13 01:25:50.298322 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.298331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.298338 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:50.298345 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:50.298352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:25:50.298358 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:25:50.298365 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:25:50.298372 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:50.298386 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:50.298393 kernel: Remapping and enabling EFI services. Dec 13 01:25:50.298400 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:50.298407 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:25:50.298415 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:25:50.298423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:50.298430 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:25:50.298437 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:50.298444 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:25:50.298451 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:25:50.298460 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:25:50.298468 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:25:50.298475 kernel: CPU features: detected: CRC32 instructions Dec 13 01:25:50.298482 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:25:50.298489 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:25:50.298496 kernel: CPU features: detected: Privileged Access Never Dec 13 01:25:50.298503 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:25:50.298510 kernel: alternatives: applying system-wide alternatives Dec 13 01:25:50.298518 kernel: devtmpfs: initialized Dec 13 01:25:50.298526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:50.298533 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:50.298541 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:50.298548 kernel: SMBIOS 3.1.0 present. Dec 13 01:25:50.298555 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:25:50.298562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:50.298569 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:25:50.298577 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:25:50.298585 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:25:50.298593 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:50.298600 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:50.298607 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:50.298614 kernel: cpuidle: using governor menu Dec 13 01:25:50.298622 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:25:50.298629 kernel: ASID allocator initialised with 32768 entries Dec 13 01:25:50.298636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:50.298644 kernel: Serial: AMBA PL011 UART driver Dec 13 01:25:50.298652 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:25:50.298659 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:25:50.298666 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:25:50.298674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:50.298681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:50.298688 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:25:50.298695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:25:50.298702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:50.298710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:50.298718 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:25:50.298725 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:25:50.298732 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:50.298740 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:50.298747 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:50.298754 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:50.298761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:25:50.298768 kernel: ACPI: Interpreter enabled Dec 13 01:25:50.298775 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:25:50.303540 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:25:50.303558 kernel: printk: console [ttyAMA0] enabled Dec 13 01:25:50.303566 kernel: printk: bootconsole [pl11] disabled Dec 13 01:25:50.303574 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:25:50.303581 kernel: iommu: Default domain type: Translated Dec 13 01:25:50.303588 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:25:50.303596 kernel: efivars: Registered efivars operations Dec 13 01:25:50.303603 kernel: vgaarb: loaded Dec 13 01:25:50.303611 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:25:50.303618 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:50.303627 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:50.303634 kernel: pnp: PnP ACPI init Dec 13 01:25:50.303642 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:25:50.303649 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:50.303656 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:50.303664 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:25:50.303671 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:25:50.303679 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:25:50.303688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:25:50.303695 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:25:50.303702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.303710 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:50.303717 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:50.303725 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:50.303732 kernel: kvm [1]: HYP mode not available Dec 13 01:25:50.303739 kernel: Initialise system trusted keyrings Dec 13 01:25:50.303747 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:25:50.303755 kernel: Key type asymmetric registered Dec 13 01:25:50.303762 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:50.303770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:25:50.303777 kernel: io scheduler mq-deadline registered Dec 13 01:25:50.303794 kernel: io scheduler kyber registered Dec 13 01:25:50.303802 kernel: io scheduler bfq registered Dec 13 01:25:50.303809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:50.303817 kernel: thunder_xcv, ver 1.0 Dec 13 01:25:50.303824 kernel: thunder_bgx, ver 1.0 Dec 13 01:25:50.303831 kernel: nicpf, ver 1.0 Dec 13 01:25:50.303840 kernel: nicvf, ver 1.0 Dec 13 01:25:50.303987 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:25:50.304062 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:49 UTC (1734053149) Dec 13 01:25:50.304073 kernel: efifb: probing for efifb Dec 13 01:25:50.304080 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:25:50.304088 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:25:50.304095 kernel: efifb: scrolling: redraw Dec 13 01:25:50.304105 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:25:50.304112 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:25:50.304119 kernel: fb0: EFI VGA frame buffer device Dec 13 01:25:50.304127 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:25:50.304134 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:25:50.304141 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:25:50.304148 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:25:50.304156 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:25:50.304163 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:25:50.304172 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:25:50.304179 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:50.304187 kernel: Segment Routing with IPv6 Dec 13 01:25:50.304206 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:50.304215 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:50.304223 kernel: Key type dns_resolver registered Dec 13 01:25:50.304230 kernel: registered taskstats version 1 Dec 13 01:25:50.304237 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:50.304245 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:25:50.304252 kernel: Key type .fscrypt registered Dec 13 01:25:50.304261 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:50.304269 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:25:50.304276 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:50.304284 kernel: ima: No architecture policies found Dec 13 01:25:50.304291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:25:50.304298 kernel: clk: Disabling unused clocks Dec 13 01:25:50.304305 kernel: Freeing unused kernel memory: 39360K Dec 13 01:25:50.304312 kernel: Run /init as init process Dec 13 01:25:50.304321 kernel: with arguments: Dec 13 01:25:50.304329 kernel: /init Dec 13 01:25:50.304336 kernel: with environment: Dec 13 01:25:50.304343 kernel: HOME=/ Dec 13 01:25:50.304350 kernel: TERM=linux Dec 13 01:25:50.304357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:50.304367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:50.304376 systemd[1]: Detected virtualization microsoft. Dec 13 01:25:50.304386 systemd[1]: Detected architecture arm64. Dec 13 01:25:50.304393 systemd[1]: Running in initrd. Dec 13 01:25:50.304401 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:50.304408 systemd[1]: Hostname set to . Dec 13 01:25:50.304417 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:50.304424 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:50.304432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:50.304440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:50.304451 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:25:50.304459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:50.304467 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:50.304481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:50.304491 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:50.304500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:50.304508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:50.304518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:50.304526 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:50.304534 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:50.304542 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:50.304549 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:50.304557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:50.304565 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:50.304573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:50.304582 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:25:50.304590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:50.304598 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:50.304606 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:50.304614 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:50.304623 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:50.304631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:50.304638 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:50.304646 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:50.304656 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:50.304670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:50.304697 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:25:50.304718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:50.304729 systemd-journald[217]: Journal started Dec 13 01:25:50.304747 systemd-journald[217]: Runtime Journal (/run/log/journal/d4a46c9442554cf9b6ece953f35ec174) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:25:50.305407 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:25:50.324800 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:50.323605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:50.340810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:50.376301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:50.376325 kernel: Bridge firewalling registered Dec 13 01:25:50.364239 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:50.375668 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:25:50.381272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:50.392380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:50.417694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:50.432962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:50.446167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:50.460959 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:50.479360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:50.488727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:50.501004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:50.516204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:50.543030 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:50.558397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:50.564932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:25:50.587533 dracut-cmdline[250]: dracut-dracut-053 Dec 13 01:25:50.592474 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:50.623119 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:50.648416 systemd-resolved[259]: Positive Trust Anchors: Dec 13 01:25:50.648436 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:50.648468 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:50.650841 systemd-resolved[259]: Defaulting to hostname 'linux'. Dec 13 01:25:50.652930 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:50.667730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:50.770813 kernel: SCSI subsystem initialized Dec 13 01:25:50.778800 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:25:50.788806 kernel: iscsi: registered transport (tcp) Dec 13 01:25:50.806201 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:50.806221 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:50.845057 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:50.861999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:50.893119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:50.893192 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:50.898870 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:50.946807 kernel: raid6: neonx8 gen() 15754 MB/s Dec 13 01:25:50.966800 kernel: raid6: neonx4 gen() 15669 MB/s Dec 13 01:25:50.986791 kernel: raid6: neonx2 gen() 13261 MB/s Dec 13 01:25:51.007798 kernel: raid6: neonx1 gen() 10507 MB/s Dec 13 01:25:51.027790 kernel: raid6: int64x8 gen() 6979 MB/s Dec 13 01:25:51.047792 kernel: raid6: int64x4 gen() 7354 MB/s Dec 13 01:25:51.068793 kernel: raid6: int64x2 gen() 6133 MB/s Dec 13 01:25:51.092486 kernel: raid6: int64x1 gen() 5062 MB/s Dec 13 01:25:51.092506 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Dec 13 01:25:51.115902 kernel: raid6: .... 
xor() 11949 MB/s, rmw enabled Dec 13 01:25:51.115920 kernel: raid6: using neon recovery algorithm Dec 13 01:25:51.128906 kernel: xor: measuring software checksum speed Dec 13 01:25:51.128925 kernel: 8regs : 19764 MB/sec Dec 13 01:25:51.132666 kernel: 32regs : 19617 MB/sec Dec 13 01:25:51.136400 kernel: arm64_neon : 27061 MB/sec Dec 13 01:25:51.141031 kernel: xor: using function: arm64_neon (27061 MB/sec) Dec 13 01:25:51.191801 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:51.201349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:51.217974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:51.240119 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 13 01:25:51.246684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:51.264072 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:51.281834 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Dec 13 01:25:51.309039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:51.325059 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:51.365060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:51.389019 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:51.417062 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:51.428038 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:51.450601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:51.473192 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:51.495939 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:25:51.509243 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:25:51.509330 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:25:51.509345 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:25:51.509219 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:51.574253 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:25:51.574277 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:25:51.574991 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:25:51.575005 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:25:51.575016 kernel: PTP clock support registered Dec 13 01:25:51.575032 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:25:51.575295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:51.595871 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:25:51.595893 kernel: scsi host0: storvsc_host_t Dec 13 01:25:51.597105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 01:25:51.626300 kernel: scsi host1: storvsc_host_t Dec 13 01:25:51.626473 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:25:51.626495 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:25:51.597313 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:51.626428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:51.681526 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:25:51.681558 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:25:51.681568 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:25:51.681577 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:25:51.681586 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:25:51.632939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:51.633223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:51.664743 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.768436 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:25:51.801174 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: VF slot 1 added Dec 13 01:25:51.801315 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:25:51.801327 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:25:51.801338 kernel: hv_pci f1da8e40-1385-4cf9-9f2c-962569515642: PCI VMBus probing: Using version 0x10004 Dec 13 01:25:51.920995 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:25:51.921526 kernel: hv_pci f1da8e40-1385-4cf9-9f2c-962569515642: PCI host bridge to bus 1385:00 Dec 13 01:25:51.921642 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:25:51.921753 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:25:51.921838 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:25:51.921919 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:25:51.922000 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:25:51.922097 kernel: pci_bus 1385:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:25:51.922193 kernel: pci_bus 1385:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:25:51.922279 kernel: pci 1385:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:25:51.922383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:51.922393 kernel: pci 1385:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:51.922488 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:25:51.922572 kernel: pci 1385:00:02.0: enabling Extended Tags Dec 13 01:25:51.922675 kernel: pci 1385:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1385:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:25:51.922763 kernel: pci_bus 1385:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:25:51.922838 kernel: pci 1385:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:51.732957 systemd-resolved[259]: Clock change detected. Flushing caches. Dec 13 01:25:51.768338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.790929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:51.791060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:25:51.833842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:51.894737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:51.982846 kernel: mlx5_core 1385:00:02.0: enabling device (0000 -> 0002) Dec 13 01:25:52.198361 kernel: mlx5_core 1385:00:02.0: firmware version: 16.30.1284 Dec 13 01:25:52.198508 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: VF registering: eth1 Dec 13 01:25:52.198600 kernel: mlx5_core 1385:00:02.0 eth1: joined to eth0 Dec 13 01:25:52.198692 kernel: mlx5_core 1385:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:25:51.923300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:51.985733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:52.219426 kernel: mlx5_core 1385:00:02.0 enP4997s1: renamed from eth1 Dec 13 01:25:52.268011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:25:52.351075 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (490) Dec 13 01:25:52.368333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:25:52.397977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:25:52.415073 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (496) Dec 13 01:25:52.421699 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:25:52.428250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:25:52.461370 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:52.487066 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:52.494067 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:53.503395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:53.503450 disk-uuid[601]: The operation has completed successfully. Dec 13 01:25:53.556632 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:53.556730 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:53.589192 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:53.602100 sh[687]: Success Dec 13 01:25:53.632085 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:25:53.808464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:53.814306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:53.829187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:53.862600 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:25:53.862663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:53.869285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:53.874257 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:53.878255 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:54.157770 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 01:25:54.163165 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:54.179318 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:54.187233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:25:54.223526 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:54.223581 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:54.228328 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:54.254069 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:54.262319 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:54.274274 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:54.281323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:54.293319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:54.317640 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:54.335195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:54.362076 systemd-networkd[871]: lo: Link UP Dec 13 01:25:54.362085 systemd-networkd[871]: lo: Gained carrier Dec 13 01:25:54.363691 systemd-networkd[871]: Enumeration completed Dec 13 01:25:54.363782 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:54.372678 systemd[1]: Reached target network.target - Network. Dec 13 01:25:54.376586 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:54.376589 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:54.454081 kernel: mlx5_core 1385:00:02.0 enP4997s1: Link up Dec 13 01:25:54.491070 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: Data path switched to VF: enP4997s1 Dec 13 01:25:54.491495 systemd-networkd[871]: enP4997s1: Link UP Dec 13 01:25:54.491624 systemd-networkd[871]: eth0: Link UP Dec 13 01:25:54.491728 systemd-networkd[871]: eth0: Gained carrier Dec 13 01:25:54.491736 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:54.499292 systemd-networkd[871]: enP4997s1: Gained carrier Dec 13 01:25:54.524100 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:55.193013 ignition[858]: Ignition 2.19.0 Dec 13 01:25:55.196399 ignition[858]: Stage: fetch-offline Dec 13 01:25:55.196454 ignition[858]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.198372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:55.196464 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.214188 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:55.196558 ignition[858]: parsed url from cmdline: "" Dec 13 01:25:55.196561 ignition[858]: no config URL provided Dec 13 01:25:55.196566 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:55.196573 ignition[858]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:55.196579 ignition[858]: failed to fetch config: resource requires networking Dec 13 01:25:55.196752 ignition[858]: Ignition finished successfully Dec 13 01:25:55.229740 ignition[881]: Ignition 2.19.0 Dec 13 01:25:55.229746 ignition[881]: Stage: fetch Dec 13 01:25:55.229947 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.229960 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.234212 ignition[881]: parsed url from cmdline: "" Dec 13 01:25:55.234217 ignition[881]: no config URL provided Dec 13 01:25:55.234235 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:55.234250 ignition[881]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:55.234275 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:25:55.346129 ignition[881]: GET result: OK Dec 13 01:25:55.346220 ignition[881]: config has been read from IMDS userdata Dec 13 01:25:55.346294 ignition[881]: parsing config with SHA512: f5ea8ca8b5bfef601731b4d0a7fa48c6e2d109e43c3f32f453587cdb195ca4a7e90ad6240608dea8d8e066325634f5620b0799829d89c19e349675a23f76dd9c Dec 13 01:25:55.350669 unknown[881]: fetched base config from "system" Dec 13 01:25:55.351141 ignition[881]: fetch: fetch complete Dec 13 01:25:55.350677 unknown[881]: fetched base config from "system" Dec 13 01:25:55.351146 ignition[881]: fetch: fetch passed Dec 13 01:25:55.350682 unknown[881]: fetched user config from "azure" Dec 13 01:25:55.351192 ignition[881]: Ignition finished successfully Dec 13 01:25:55.357081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:25:55.374294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:55.390604 ignition[888]: Ignition 2.19.0 Dec 13 01:25:55.400119 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:55.390619 ignition[888]: Stage: kargs Dec 13 01:25:55.390862 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.390872 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.392312 ignition[888]: kargs: kargs passed Dec 13 01:25:55.392378 ignition[888]: Ignition finished successfully Dec 13 01:25:55.427354 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:55.448453 ignition[894]: Ignition 2.19.0 Dec 13 01:25:55.449089 ignition[894]: Stage: disks Dec 13 01:25:55.449278 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:55.454632 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:55.449288 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:55.462385 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:55.450306 ignition[894]: disks: disks passed Dec 13 01:25:55.472298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:55.450352 ignition[894]: Ignition finished successfully Dec 13 01:25:55.483682 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:55.494378 systemd[1]: Reached target sysinit.target - System Initialization. 
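The fetch-stage entries above show Ignition, finding no config URL on the kernel command line, pulling user data from the Azure IMDS endpoint (http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text) and logging the SHA512 of the parsed config. The Python sketch below illustrates that request-and-digest flow only; the real fetch happens inside Ignition's Azure provider, and the "Metadata: true" header and base64 decoding are assumptions about how IMDS serves user data rather than facts stated in the log.

    # Illustrative sketch of the userData fetch that the log records above.
    # Assumptions not shown in the log: IMDS expects a "Metadata: true" header,
    # and the userData payload arrives base64-encoded when format=text is used.
    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    def fetch_userdata_config() -> bytes:
        req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return base64.b64decode(resp.read())

    if __name__ == "__main__":
        config = fetch_userdata_config()
        # Corresponds to the "parsing config with SHA512: ..." line above.
        print("config SHA512:", hashlib.sha512(config).hexdigest())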
Dec 13 01:25:55.505233 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:55.529304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:55.589221 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:25:55.597136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:25:55.617269 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:25:55.671068 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:55.672315 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:55.676745 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:55.716120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:55.725365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:55.743449 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:25:55.756728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (913) Dec 13 01:25:55.751633 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:55.791075 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:55.791097 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:55.791108 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:55.751673 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:55.770837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:55.813066 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:55.813339 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:25:55.820085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:56.080377 systemd-networkd[871]: enP4997s1: Gained IPv6LL Dec 13 01:25:56.144372 systemd-networkd[871]: eth0: Gained IPv6LL Dec 13 01:25:56.255749 coreos-metadata[915]: Dec 13 01:25:56.255 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:25:56.265898 coreos-metadata[915]: Dec 13 01:25:56.265 INFO Fetch successful Dec 13 01:25:56.265898 coreos-metadata[915]: Dec 13 01:25:56.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:25:56.294295 coreos-metadata[915]: Dec 13 01:25:56.294 INFO Fetch successful Dec 13 01:25:56.299971 coreos-metadata[915]: Dec 13 01:25:56.296 INFO wrote hostname ci-4081.2.1-a-16a3da9678 to /sysroot/etc/hostname Dec 13 01:25:56.300365 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:56.431423 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:56.440220 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:56.448971 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:56.470826 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:57.091884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 13 01:25:57.107275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:57.116234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:57.131561 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:25:57.141717 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.160210 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:57.177191 ignition[1031]: INFO : Ignition 2.19.0 Dec 13 01:25:57.177191 ignition[1031]: INFO : Stage: mount Dec 13 01:25:57.185251 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:57.185251 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:57.185251 ignition[1031]: INFO : mount: mount passed Dec 13 01:25:57.185251 ignition[1031]: INFO : Ignition finished successfully Dec 13 01:25:57.182765 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:57.208246 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:57.227288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:57.262211 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1043) Dec 13 01:25:57.262274 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.267960 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:57.272101 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:57.278062 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:57.279992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:57.306056 ignition[1061]: INFO : Ignition 2.19.0 Dec 13 01:25:57.306056 ignition[1061]: INFO : Stage: files Dec 13 01:25:57.313477 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:57.313477 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:57.313477 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:57.337056 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:57.337056 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:57.389693 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:57.397106 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:57.397106 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:57.390141 unknown[1061]: wrote ssh authorized keys file for user: core Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:57.415851 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:57.588969 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): GET result: OK Dec 13 01:25:57.851694 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:57.851694 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:57.871827 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:58.297685 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:25:58.367695 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:58.377862 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:25:58.791407 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:25:59.021725 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:59.021725 ignition[1061]: INFO : files: op(d): 
[started] processing unit "containerd.service" Dec 13 01:25:59.045803 ignition[1061]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:59.062029 ignition[1061]: INFO : files: files passed Dec 13 01:25:59.062029 ignition[1061]: INFO : Ignition finished successfully Dec 13 01:25:59.061719 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:59.110359 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:25:59.129237 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:59.150081 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:59.244597 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.244597 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.150174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:59.267717 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:59.177530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:59.192489 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:59.213275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:59.264280 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:59.266076 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:25:59.275139 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:59.289564 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:59.303138 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Dec 13 01:25:59.322306 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:59.367606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:59.384323 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:59.404636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:59.416229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:59.423279 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:59.433789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:59.433967 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:59.449372 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:59.461644 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:59.472193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:59.483028 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:59.501965 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:59.514387 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:59.525493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:59.532565 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:59.544915 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:59.560228 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:59.574536 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:59.574706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:59.588734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:59.594785 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:59.606135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:59.606247 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:59.619034 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:59.619228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:59.635334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:59.635504 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:59.649193 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:59.649355 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:59.659306 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:25:59.659469 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:59.696182 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:25:59.731092 ignition[1112]: INFO : Ignition 2.19.0 Dec 13 01:25:59.731092 ignition[1112]: INFO : Stage: umount Dec 13 01:25:59.731092 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:59.731092 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:59.731092 ignition[1112]: INFO : umount: umount passed Dec 13 01:25:59.731092 ignition[1112]: INFO : Ignition finished successfully Dec 13 01:25:59.713210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:59.724457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:59.724630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:59.735209 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:59.735328 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:59.751443 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:59.752067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:59.764860 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:59.764960 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:59.774624 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:59.774677 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:59.780203 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:59.780242 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:59.789915 systemd[1]: Stopped target network.target - Network. Dec 13 01:25:59.799230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:59.799282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:59.810721 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:59.815652 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:59.815721 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:59.830210 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:59.840356 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:25:59.852778 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:59.852831 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:59.863498 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:59.863546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:59.874415 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:59.874468 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:59.884897 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:59.884941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:59.896351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:59.906329 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:59.927116 systemd-networkd[871]: eth0: DHCPv6 lease lost Dec 13 01:25:59.927734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:59.928546 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 01:25:59.928637 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:59.941136 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:59.941255 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:00.176071 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: Data path switched from VF: enP4997s1 Dec 13 01:25:59.953814 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:59.953979 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:25:59.964634 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:59.964840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:59.978035 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:59.978116 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:59.986315 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:59.986383 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:00.012232 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:00.021342 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:00.021414 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:00.033341 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:00.033401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:00.043794 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:00.043840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:00.054619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:00.054667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:00.066695 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:00.103227 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:00.103370 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:00.116142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:00.116189 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:00.125401 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:00.125431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:00.135598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:00.135650 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:00.159935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:00.160006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:00.176107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:00.176192 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:00.205301 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:00.220789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:00.220867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:26:00.233895 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:00.233950 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:00.246383 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:00.246435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:00.463083 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:00.258397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:00.258446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:00.269770 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:00.269857 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:00.279904 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:00.279981 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:00.298325 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:00.325290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:00.399578 systemd[1]: Switching root. Dec 13 01:26:00.511843 systemd-journald[217]: Journal stopped Dec 13 01:26:04.728556 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:26:04.728580 kernel: SELinux: policy capability open_perms=1 Dec 13 01:26:04.728590 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:26:04.728598 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:26:04.728608 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:26:04.728616 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:26:04.728624 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:26:04.728632 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:26:04.728640 kernel: audit: type=1403 audit(1734053162.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:26:04.728650 systemd[1]: Successfully loaded SELinux policy in 119.111ms. Dec 13 01:26:04.728661 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.113ms. Dec 13 01:26:04.728672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:26:04.728681 systemd[1]: Detected virtualization microsoft. Dec 13 01:26:04.728690 systemd[1]: Detected architecture arm64. Dec 13 01:26:04.728700 systemd[1]: Detected first boot. Dec 13 01:26:04.728711 systemd[1]: Hostname set to . Dec 13 01:26:04.728721 systemd[1]: Initializing machine ID from random generator. Dec 13 01:26:04.728730 zram_generator::config[1171]: No configuration found. Dec 13 01:26:04.728741 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:26:04.728750 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:26:04.728759 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:26:04.728769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:26:04.728780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 13 01:26:04.728789 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:26:04.728799 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:26:04.728808 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:26:04.728818 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:26:04.728827 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:26:04.728836 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:26:04.728847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:04.728856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:04.728866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:26:04.728875 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:26:04.728884 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:26:04.728894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:26:04.728903 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:26:04.728912 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:04.728922 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:26:04.728932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:26:04.728942 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:26:04.728954 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:26:04.728964 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:26:04.728974 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:26:04.728983 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:26:04.728993 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:26:04.729004 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:26:04.729014 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:04.729024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:04.729033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:04.729042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:26:04.729069 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:26:04.729080 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:26:04.729089 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:26:04.729098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:26:04.729108 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:26:04.729117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:26:04.729127 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 13 01:26:04.729136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:04.729147 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:26:04.729157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:26:04.729168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:04.729178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:04.729188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:04.729197 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:26:04.729206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:04.729217 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:26:04.729228 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:26:04.729238 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:26:04.729247 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:26:04.729256 kernel: fuse: init (API version 7.39) Dec 13 01:26:04.729265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:26:04.729274 kernel: loop: module loaded Dec 13 01:26:04.729283 kernel: ACPI: bus type drm_connector registered Dec 13 01:26:04.729292 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:26:04.729319 systemd-journald[1274]: Collecting audit messages is disabled. Dec 13 01:26:04.729342 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:26:04.729352 systemd-journald[1274]: Journal started Dec 13 01:26:04.729374 systemd-journald[1274]: Runtime Journal (/run/log/journal/64192cc00c45438eb356d5403523e29d) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:26:04.761281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:26:04.775579 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:26:04.776839 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:26:04.783138 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:26:04.789610 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:26:04.795391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:26:04.801708 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:26:04.808702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:26:04.814760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:26:04.821996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:04.829693 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:26:04.829944 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:26:04.837616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:04.837857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 01:26:04.844623 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:04.844776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:04.851402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:04.851554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:04.858792 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:26:04.858940 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:26:04.865618 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:04.865811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:04.872432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:04.879401 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:26:04.887392 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:26:04.894883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:04.910137 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:26:04.927218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:26:04.934775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:26:04.941151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:26:04.945242 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:26:04.952804 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:26:04.959784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:04.961010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:26:04.967606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:04.968943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:04.978282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:26:04.999279 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:26:05.010328 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:26:05.017347 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:26:05.025038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:26:05.043221 systemd-journald[1274]: Time spent on flushing to /var/log/journal/64192cc00c45438eb356d5403523e29d is 16.668ms for 894 entries. Dec 13 01:26:05.043221 systemd-journald[1274]: System Journal (/var/log/journal/64192cc00c45438eb356d5403523e29d) is 8.0M, max 2.6G, 2.6G free. Dec 13 01:26:05.092340 systemd-journald[1274]: Received client request to flush runtime journal. Dec 13 01:26:05.038631 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:26:05.052802 udevadm[1331]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:26:05.096897 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:26:05.117289 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Dec 13 01:26:05.117308 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Dec 13 01:26:05.123102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:05.129993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:05.144237 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:26:05.237765 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:26:05.249239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:26:05.265805 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Dec 13 01:26:05.266095 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Dec 13 01:26:05.271400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:06.008198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:26:06.022189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:06.047320 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Dec 13 01:26:06.130879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:06.153460 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:26:06.197606 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Dec 13 01:26:06.241221 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Dec 13 01:26:06.244679 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:26:06.259069 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373) Dec 13 01:26:06.317737 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:26:06.317843 kernel: hv_vmbus: registering driver hv_balloon Dec 13 01:26:06.317862 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:26:06.325230 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 01:26:06.326029 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:26:06.348079 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 01:26:06.360438 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 01:26:06.360539 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 01:26:06.364928 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:26:06.372641 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:26:06.392189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:06.409070 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1366) Dec 13 01:26:06.424786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:06.425448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:06.489135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
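The journald entries in this stretch report the runtime journal's size and the flush to the persistent journal ("Time spent on flushing ... is 16.668ms for 894 entries", plus the runtime and system journal capacities). The small helper below pulls those figures out of a capture like this one; the regexes simply mirror the message formats visible above.

    # Extract journald sizing and flush timings from a boot capture.
    import re
    import sys

    JOURNAL_SIZE = re.compile(
        r"(Runtime|System) Journal \(([^)]+)\) is ([\d.]+[KMG]), "
        r"max ([\d.]+[KMG]), ([\d.]+[KMG]) free")
    FLUSH_TIME = re.compile(
        r"Time spent on flushing to (\S+) is ([\d.]+)ms for (\d+) entries")

    def scan(lines):
        for line in lines:
            m = JOURNAL_SIZE.search(line)
            if m:
                kind, path, used, cap, free = m.groups()
                print(f"{kind} journal {path}: {used} used, max {cap}, {free} free")
            m = FLUSH_TIME.search(line)
            if m:
                path, ms, entries = m.groups()
                print(f"flushed {entries} entries to {path} in {ms} ms")

    if __name__ == "__main__":
        scan(sys.stdin)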
Dec 13 01:26:06.504069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:26:06.515432 systemd-networkd[1370]: lo: Link UP Dec 13 01:26:06.515752 systemd-networkd[1370]: lo: Gained carrier Dec 13 01:26:06.517914 systemd-networkd[1370]: Enumeration completed Dec 13 01:26:06.518386 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:06.518391 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:26:06.519147 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:26:06.532204 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:26:06.579069 kernel: mlx5_core 1385:00:02.0 enP4997s1: Link up Dec 13 01:26:06.604064 kernel: hv_netvsc 000d3af6-5667-000d-3af6-5667000d3af6 eth0: Data path switched to VF: enP4997s1 Dec 13 01:26:06.604601 systemd-networkd[1370]: enP4997s1: Link UP Dec 13 01:26:06.604691 systemd-networkd[1370]: eth0: Link UP Dec 13 01:26:06.604694 systemd-networkd[1370]: eth0: Gained carrier Dec 13 01:26:06.604708 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:06.609335 systemd-networkd[1370]: enP4997s1: Gained carrier Dec 13 01:26:06.613694 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:26:06.622243 systemd-networkd[1370]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:06.633265 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:26:06.694891 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:06.728602 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:26:06.735657 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:06.746195 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:26:06.756268 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:26:06.782655 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:26:06.790500 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:26:06.797871 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:26:06.798271 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:26:06.804600 systemd[1]: Reached target machines.target - Containers. Dec 13 01:26:06.813176 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:26:06.824187 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:26:06.831979 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:26:06.837640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:06.838933 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 13 01:26:06.855227 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:26:06.865753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:26:06.873671 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:26:06.882579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:06.893932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:26:06.920220 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 01:26:06.944809 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:26:06.945565 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:26:07.271189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:26:07.297118 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 01:26:07.341076 kernel: loop2: detected capacity change from 0 to 31320 Dec 13 01:26:07.643081 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:26:07.728172 systemd-networkd[1370]: eth0: Gained IPv6LL Dec 13 01:26:07.729877 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:26:07.946091 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:26:07.954072 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 01:26:07.961078 kernel: loop6: detected capacity change from 0 to 31320 Dec 13 01:26:07.968079 kernel: loop7: detected capacity change from 0 to 114432 Dec 13 01:26:07.970974 (sd-merge)[1476]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 01:26:07.972083 (sd-merge)[1476]: Merged extensions into '/usr'. Dec 13 01:26:07.975117 systemd[1]: Reloading requested from client PID 1458 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:26:07.975135 systemd[1]: Reloading... Dec 13 01:26:08.037205 zram_generator::config[1509]: No configuration found. Dec 13 01:26:08.159495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:08.227898 systemd[1]: Reloading finished in 252 ms. Dec 13 01:26:08.243931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:26:08.264262 systemd[1]: Starting ensure-sysext.service... Dec 13 01:26:08.269604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:26:08.279294 systemd[1]: Reloading requested from client PID 1564 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:26:08.279313 systemd[1]: Reloading... Dec 13 01:26:08.293006 systemd-tmpfiles[1565]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:26:08.293602 systemd-tmpfiles[1565]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:26:08.297248 systemd-tmpfiles[1565]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:26:08.298033 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. Dec 13 01:26:08.298448 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. 
Dec 13 01:26:08.317592 systemd-tmpfiles[1565]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:08.317737 systemd-tmpfiles[1565]: Skipping /boot Dec 13 01:26:08.326143 systemd-tmpfiles[1565]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:26:08.327177 systemd-tmpfiles[1565]: Skipping /boot Dec 13 01:26:08.356518 zram_generator::config[1590]: No configuration found. Dec 13 01:26:08.433151 systemd-networkd[1370]: enP4997s1: Gained IPv6LL Dec 13 01:26:08.472416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:08.541430 systemd[1]: Reloading finished in 261 ms. Dec 13 01:26:08.555985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:08.570915 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:08.591867 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:26:08.599895 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:26:08.608221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:26:08.626186 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:26:08.642407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:08.649433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:08.659366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:08.678619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:08.691509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:08.692609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:08.692782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:08.703331 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:26:08.711319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:08.711490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:08.718844 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:08.719106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:08.726216 augenrules[1686]: No rules Dec 13 01:26:08.726729 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:08.736699 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:26:08.749813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:08.757345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:08.760926 systemd-resolved[1668]: Positive Trust Anchors: Dec 13 01:26:08.760938 systemd-resolved[1668]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:26:08.760971 systemd-resolved[1668]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:26:08.774060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:08.782005 systemd-resolved[1668]: Using system hostname 'ci-4081.2.1-a-16a3da9678'. Dec 13 01:26:08.784934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:08.790494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:08.791476 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:26:08.798669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:08.798949 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:08.805752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:08.806008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:08.813541 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:08.813858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:26:08.824289 systemd[1]: Reached target network.target - Network. Dec 13 01:26:08.829363 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:26:08.835409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:26:08.842003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:26:08.853406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:26:08.860531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:26:08.868234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:26:08.876298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:26:08.882951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:26:08.883319 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:26:08.897695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:26:08.897856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:26:08.904440 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:26:08.904596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:26:08.911517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:26:08.911675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:26:08.918637 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:26:08.920000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 01:26:08.930888 systemd[1]: Finished ensure-sysext.service. Dec 13 01:26:08.937652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:26:08.937867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:26:09.551963 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:26:09.560133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:26:11.353006 ldconfig[1453]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:26:11.363520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:26:11.374268 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:26:11.388660 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:26:11.394803 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:11.400585 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:26:11.407304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:26:11.414308 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:26:11.419897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:26:11.426416 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:26:11.433293 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:26:11.433338 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:11.438218 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:11.457513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:26:11.465148 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:26:11.471119 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:26:11.479169 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:26:11.484937 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:11.490036 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:11.495141 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:26:11.495184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:11.495208 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:11.497476 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 01:26:11.505187 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:26:11.514230 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:26:11.525419 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:26:11.536259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 01:26:11.542810 (chronyd)[1734]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 01:26:11.545250 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:26:11.552582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:26:11.554140 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 01:26:11.555261 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 01:26:11.561896 jq[1741]: false Dec 13 01:26:11.563175 KVP[1743]: KVP starting; pid is:1743 Dec 13 01:26:11.564358 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 01:26:11.568027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:11.574610 chronyd[1748]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 01:26:11.582282 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:26:11.588534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:26:11.598137 chronyd[1748]: Timezone right/UTC failed leap second check, ignoring Dec 13 01:26:11.598333 chronyd[1748]: Loaded seccomp filter (level 2) Dec 13 01:26:11.603176 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:26:11.605742 KVP[1743]: KVP LIC Version: 3.1 Dec 13 01:26:11.611353 kernel: hv_utils: KVP IC version 4.0 Dec 13 01:26:11.612075 extend-filesystems[1742]: Found loop4 Dec 13 01:26:11.612075 extend-filesystems[1742]: Found loop5 Dec 13 01:26:11.612075 extend-filesystems[1742]: Found loop6 Dec 13 01:26:11.612075 extend-filesystems[1742]: Found loop7 Dec 13 01:26:11.612075 extend-filesystems[1742]: Found sda Dec 13 01:26:11.612075 extend-filesystems[1742]: Found sda1 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda2 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda3 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found usr Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda4 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda6 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda7 Dec 13 01:26:11.674156 extend-filesystems[1742]: Found sda9 Dec 13 01:26:11.674156 extend-filesystems[1742]: Checking size of /dev/sda9 Dec 13 01:26:11.865243 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1794) Dec 13 01:26:11.629490 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 01:26:11.643339 dbus-daemon[1740]: [system] SELinux support is enabled Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.748 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.761 INFO Fetch successful Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.761 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.770 INFO Fetch successful Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.770 INFO Fetching http://168.63.129.16/machine/6ae7b581-3030-4536-8612-ea52e28d3374/50071e5a%2D7cb6%2D4db2%2D9107%2D6293485ee706.%5Fci%2D4081.2.1%2Da%2D16a3da9678?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.776 INFO Fetch successful Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.776 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:26:11.867088 coreos-metadata[1736]: Dec 13 01:26:11.793 INFO Fetch successful Dec 13 01:26:11.867321 extend-filesystems[1742]: Old size kept for /dev/sda9 Dec 13 01:26:11.867321 extend-filesystems[1742]: Found sr0 Dec 13 01:26:11.647740 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:26:11.672205 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:26:11.686806 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:26:11.692232 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:26:11.714950 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:26:11.896970 update_engine[1779]: I20241213 01:26:11.777432 1779 main.cc:92] Flatcar Update Engine starting Dec 13 01:26:11.896970 update_engine[1779]: I20241213 01:26:11.780615 1779 update_check_scheduler.cc:74] Next update check in 4m22s Dec 13 01:26:11.741417 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:26:11.921038 jq[1782]: true Dec 13 01:26:11.754935 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 01:26:11.771492 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:26:11.771741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:26:11.771985 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:26:11.779041 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:26:11.799574 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:26:11.799814 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:26:11.850550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:26:11.858920 systemd-logind[1774]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 01:26:11.867691 systemd-logind[1774]: New seat seat0. Dec 13 01:26:11.913409 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:26:11.924980 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:26:11.925233 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 13 01:26:11.952596 (ntainerd)[1829]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:26:11.955446 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:26:11.977141 jq[1827]: true Dec 13 01:26:12.001580 dbus-daemon[1740]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:26:12.010253 tar[1816]: linux-arm64/helm Dec 13 01:26:12.023296 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:26:12.036595 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:26:12.037293 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:26:12.037419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:26:12.048266 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:26:12.048378 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:26:12.059038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:26:12.069339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:26:12.118454 bash[1863]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:12.120759 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:26:12.133652 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:26:12.258313 locksmithd[1864]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:26:12.532506 tar[1816]: linux-arm64/LICENSE Dec 13 01:26:12.532711 tar[1816]: linux-arm64/README.md Dec 13 01:26:12.548085 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:26:12.665554 containerd[1829]: time="2024-12-13T01:26:12.665460160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:26:12.688587 sshd_keygen[1781]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:26:12.700715 containerd[1829]: time="2024-12-13T01:26:12.700652680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.702092 containerd[1829]: time="2024-12-13T01:26:12.702037840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:12.702226 containerd[1829]: time="2024-12-13T01:26:12.702210920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:26:12.702286 containerd[1829]: time="2024-12-13T01:26:12.702274040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:26:12.702486 containerd[1829]: time="2024-12-13T01:26:12.702469600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:26:12.702556 containerd[1829]: time="2024-12-13T01:26:12.702545400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.702673 containerd[1829]: time="2024-12-13T01:26:12.702656200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:12.702737 containerd[1829]: time="2024-12-13T01:26:12.702724520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703008 containerd[1829]: time="2024-12-13T01:26:12.702987120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703100 containerd[1829]: time="2024-12-13T01:26:12.703086240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703155 containerd[1829]: time="2024-12-13T01:26:12.703142080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703201 containerd[1829]: time="2024-12-13T01:26:12.703189080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703343 containerd[1829]: time="2024-12-13T01:26:12.703327480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703599 containerd[1829]: time="2024-12-13T01:26:12.703581800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703824 containerd[1829]: time="2024-12-13T01:26:12.703803680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:12.703886 containerd[1829]: time="2024-12-13T01:26:12.703874480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:26:12.704018 containerd[1829]: time="2024-12-13T01:26:12.704002800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:26:12.704243 containerd[1829]: time="2024-12-13T01:26:12.704141960Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:26:12.721651 containerd[1829]: time="2024-12-13T01:26:12.721617240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:26:12.721865 containerd[1829]: time="2024-12-13T01:26:12.721769840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:26:12.721865 containerd[1829]: time="2024-12-13T01:26:12.721792560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 01:26:12.723144 containerd[1829]: time="2024-12-13T01:26:12.721935160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:26:12.723144 containerd[1829]: time="2024-12-13T01:26:12.721966120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723332600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723642080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723763000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723785120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723798600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723812520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723827240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723840960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723856560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723872280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723886920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723901840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723914640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:26:12.731559 containerd[1829]: time="2024-12-13T01:26:12.723935240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.725470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.723949960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.723963000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.723981160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.723993320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724006440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724017880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724034280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724071560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724096400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724108280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724120760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724133680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724148760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724168120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732000 containerd[1829]: time="2024-12-13T01:26:12.724181120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724191840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724241080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724259760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724270320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724281640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724290920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724302400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724312400Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:12.732381 containerd[1829]: time="2024-12-13T01:26:12.724322960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.724613840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.724669240Z" level=info msg="Connect containerd service" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.724705760Z" level=info msg="using legacy CRI server" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.724712080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:12.732567 containerd[1829]: 
time="2024-12-13T01:26:12.724799440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.728695120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.728979240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729013560Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729118800Z" level=info msg="Start subscribing containerd event" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729158840Z" level=info msg="Start recovering state" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729220600Z" level=info msg="Start event monitor" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729231200Z" level=info msg="Start snapshots syncer" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729239640Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729248720Z" level=info msg="Start streaming server" Dec 13 01:26:12.732567 containerd[1829]: time="2024-12-13T01:26:12.729299840Z" level=info msg="containerd successfully booted in 0.067457s" Dec 13 01:26:12.736899 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:12.754362 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:26:12.761335 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 01:26:12.772223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:12.780285 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:26:12.780610 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:26:12.787426 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:12.801414 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:26:12.812068 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 01:26:12.831006 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:26:12.842358 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:26:12.856384 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:26:12.863798 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:26:12.869271 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:12.876517 systemd[1]: Startup finished in 13.067s (kernel) + 10.671s (userspace) = 23.739s. Dec 13 01:26:13.167397 login[1922]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:13.172210 login[1923]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:13.181421 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:13.188306 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 13 01:26:13.193093 systemd-logind[1774]: New session 1 of user core. Dec 13 01:26:13.201182 systemd-logind[1774]: New session 2 of user core. Dec 13 01:26:13.207869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:13.219920 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:13.227146 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:13.298401 kubelet[1908]: E1213 01:26:13.298308 1908 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:13.303309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:13.303466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:13.356621 systemd[1937]: Queued start job for default target default.target. Dec 13 01:26:13.356957 systemd[1937]: Created slice app.slice - User Application Slice. Dec 13 01:26:13.356983 systemd[1937]: Reached target paths.target - Paths. Dec 13 01:26:13.356993 systemd[1937]: Reached target timers.target - Timers. Dec 13 01:26:13.370146 systemd[1937]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:13.378726 systemd[1937]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:13.378901 systemd[1937]: Reached target sockets.target - Sockets. Dec 13 01:26:13.379008 systemd[1937]: Reached target basic.target - Basic System. Dec 13 01:26:13.379132 systemd[1937]: Reached target default.target - Main User Target. Dec 13 01:26:13.379224 systemd[1937]: Startup finished in 145ms. Dec 13 01:26:13.379401 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:13.391439 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:13.394064 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 13 01:26:14.419402 waagent[1918]: 2024-12-13T01:26:14.419311Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:26:14.425071 waagent[1918]: 2024-12-13T01:26:14.424993Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:26:14.429388 waagent[1918]: 2024-12-13T01:26:14.429331Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:26:14.435087 waagent[1918]: 2024-12-13T01:26:14.434122Z INFO Daemon Daemon Run daemon Dec 13 01:26:14.438269 waagent[1918]: 2024-12-13T01:26:14.438221Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:26:14.447028 waagent[1918]: 2024-12-13T01:26:14.446962Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:26:14.451992 waagent[1918]: 2024-12-13T01:26:14.451944Z INFO Daemon Daemon Activate resource disk Dec 13 01:26:14.456531 waagent[1918]: 2024-12-13T01:26:14.456480Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:26:14.467060 waagent[1918]: 2024-12-13T01:26:14.467001Z INFO Daemon Daemon Found device: None Dec 13 01:26:14.471263 waagent[1918]: 2024-12-13T01:26:14.471219Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:26:14.479315 waagent[1918]: 2024-12-13T01:26:14.479265Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:26:14.491646 waagent[1918]: 2024-12-13T01:26:14.491586Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:14.497584 waagent[1918]: 2024-12-13T01:26:14.497535Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:26:14.508908 waagent[1918]: 2024-12-13T01:26:14.508825Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 01:26:14.522261 waagent[1918]: 2024-12-13T01:26:14.522191Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:26:14.531490 waagent[1918]: 2024-12-13T01:26:14.531429Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:26:14.536373 waagent[1918]: 2024-12-13T01:26:14.536318Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:26:14.627066 waagent[1918]: 2024-12-13T01:26:14.623873Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:26:14.639853 waagent[1918]: 2024-12-13T01:26:14.639133Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:26:14.639217 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:26:14.643957 waagent[1918]: 2024-12-13T01:26:14.643894Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:14.649815 waagent[1918]: 2024-12-13T01:26:14.649755Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:26:14.655997 waagent[1918]: 2024-12-13T01:26:14.655948Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:26:14.661244 waagent[1918]: 2024-12-13T01:26:14.661194Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:26:14.666081 waagent[1918]: 2024-12-13T01:26:14.666021Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:26:14.709359 waagent[1918]: 2024-12-13T01:26:14.709266Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:26:14.715666 waagent[1918]: 2024-12-13T01:26:14.715637Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:26:14.720671 waagent[1918]: 2024-12-13T01:26:14.720623Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:26:14.976568 waagent[1918]: 2024-12-13T01:26:14.976416Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:26:14.982598 waagent[1918]: 2024-12-13T01:26:14.982535Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:26:14.991319 waagent[1918]: 2024-12-13T01:26:14.991269Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:15.011522 waagent[1918]: 2024-12-13T01:26:15.011475Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:26:15.016712 waagent[1918]: 2024-12-13T01:26:15.016665Z INFO Daemon Dec 13 01:26:15.019174 waagent[1918]: 2024-12-13T01:26:15.019130Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a3a5c300-5f55-466e-a901-73641952d098 eTag: 9867055094232432487 source: Fabric] Dec 13 01:26:15.029813 waagent[1918]: 2024-12-13T01:26:15.029765Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 01:26:15.036021 waagent[1918]: 2024-12-13T01:26:15.035972Z INFO Daemon Dec 13 01:26:15.038541 waagent[1918]: 2024-12-13T01:26:15.038496Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:15.048760 waagent[1918]: 2024-12-13T01:26:15.048723Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:26:15.140082 waagent[1918]: 2024-12-13T01:26:15.139898Z INFO Daemon Downloaded certificate {'thumbprint': 'E11EA6F8F5B6246211F56F30CCCF568741447E2A', 'hasPrivateKey': False} Dec 13 01:26:15.149999 waagent[1918]: 2024-12-13T01:26:15.149949Z INFO Daemon Downloaded certificate {'thumbprint': '2E0554C021B5405497F31D4F747D7E89DA11AD3B', 'hasPrivateKey': True} Dec 13 01:26:15.161076 waagent[1918]: 2024-12-13T01:26:15.160198Z INFO Daemon Fetch goal state completed Dec 13 01:26:15.171973 waagent[1918]: 2024-12-13T01:26:15.171898Z INFO Daemon Daemon Starting provisioning Dec 13 01:26:15.176906 waagent[1918]: 2024-12-13T01:26:15.176847Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:26:15.181365 waagent[1918]: 2024-12-13T01:26:15.181317Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-16a3da9678] Dec 13 01:26:15.215077 waagent[1918]: 2024-12-13T01:26:15.214964Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-16a3da9678] Dec 13 01:26:15.221660 waagent[1918]: 2024-12-13T01:26:15.221232Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:26:15.227468 waagent[1918]: 2024-12-13T01:26:15.227373Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:26:15.252940 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:15.252948 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:26:15.252993 systemd-networkd[1370]: eth0: DHCP lease lost Dec 13 01:26:15.255069 waagent[1918]: 2024-12-13T01:26:15.254234Z INFO Daemon Daemon Create user account if not exists Dec 13 01:26:15.259556 systemd-networkd[1370]: eth0: DHCPv6 lease lost Dec 13 01:26:15.260234 waagent[1918]: 2024-12-13T01:26:15.260175Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:26:15.265550 waagent[1918]: 2024-12-13T01:26:15.265497Z INFO Daemon Daemon Configure sudoer Dec 13 01:26:15.269977 waagent[1918]: 2024-12-13T01:26:15.269922Z INFO Daemon Daemon Configure sshd Dec 13 01:26:15.274280 waagent[1918]: 2024-12-13T01:26:15.274230Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:26:15.286009 waagent[1918]: 2024-12-13T01:26:15.285943Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:26:15.296110 systemd-networkd[1370]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:16.427568 waagent[1918]: 2024-12-13T01:26:16.422830Z INFO Daemon Daemon Provisioning complete Dec 13 01:26:16.441961 waagent[1918]: 2024-12-13T01:26:16.441910Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:26:16.447865 waagent[1918]: 2024-12-13T01:26:16.447805Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:26:16.457796 waagent[1918]: 2024-12-13T01:26:16.457741Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:26:16.586515 waagent[1999]: 2024-12-13T01:26:16.585861Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:26:16.586515 waagent[1999]: 2024-12-13T01:26:16.586009Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:26:16.586515 waagent[1999]: 2024-12-13T01:26:16.586091Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:26:16.626324 waagent[1999]: 2024-12-13T01:26:16.626244Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:26:16.626635 waagent[1999]: 2024-12-13T01:26:16.626599Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:16.626773 waagent[1999]: 2024-12-13T01:26:16.626739Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:16.635319 waagent[1999]: 2024-12-13T01:26:16.635258Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:16.640934 waagent[1999]: 2024-12-13T01:26:16.640893Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:26:16.641550 waagent[1999]: 2024-12-13T01:26:16.641508Z INFO ExtHandler Dec 13 01:26:16.641706 waagent[1999]: 2024-12-13T01:26:16.641674Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3b624c19-6a71-443a-9d1c-67c95bbcd0a5 eTag: 9867055094232432487 source: Fabric] Dec 13 01:26:16.642098 waagent[1999]: 2024-12-13T01:26:16.642034Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:16.643080 waagent[1999]: 2024-12-13T01:26:16.642695Z INFO ExtHandler Dec 13 01:26:16.643080 waagent[1999]: 2024-12-13T01:26:16.642765Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:16.646693 waagent[1999]: 2024-12-13T01:26:16.646657Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:26:16.723196 waagent[1999]: 2024-12-13T01:26:16.723032Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E11EA6F8F5B6246211F56F30CCCF568741447E2A', 'hasPrivateKey': False} Dec 13 01:26:16.723557 waagent[1999]: 2024-12-13T01:26:16.723512Z INFO ExtHandler Downloaded certificate {'thumbprint': '2E0554C021B5405497F31D4F747D7E89DA11AD3B', 'hasPrivateKey': True} Dec 13 01:26:16.723947 waagent[1999]: 2024-12-13T01:26:16.723905Z INFO ExtHandler Fetch goal state completed Dec 13 01:26:16.740549 waagent[1999]: 2024-12-13T01:26:16.740485Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1999 Dec 13 01:26:16.740710 waagent[1999]: 2024-12-13T01:26:16.740671Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:26:16.742341 waagent[1999]: 2024-12-13T01:26:16.742295Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:26:16.742725 waagent[1999]: 2024-12-13T01:26:16.742688Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:26:16.783659 waagent[1999]: 2024-12-13T01:26:16.783613Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:26:16.783865 waagent[1999]: 2024-12-13T01:26:16.783824Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:26:16.790311 waagent[1999]: 2024-12-13T01:26:16.790266Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:26:16.796829 systemd[1]: Reloading requested from client PID 2014 ('systemctl') (unit waagent.service)... Dec 13 01:26:16.797129 systemd[1]: Reloading... Dec 13 01:26:16.864123 zram_generator::config[2048]: No configuration found. Dec 13 01:26:16.976070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:17.049583 systemd[1]: Reloading finished in 252 ms. Dec 13 01:26:17.076279 waagent[1999]: 2024-12-13T01:26:17.076165Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:26:17.081880 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit waagent.service)... Dec 13 01:26:17.081906 systemd[1]: Reloading... Dec 13 01:26:17.147392 zram_generator::config[2140]: No configuration found. Dec 13 01:26:17.259644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:17.332801 systemd[1]: Reloading finished in 250 ms. 
Dec 13 01:26:17.357666 waagent[1999]: 2024-12-13T01:26:17.356872Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:26:17.357666 waagent[1999]: 2024-12-13T01:26:17.357039Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:26:17.751363 waagent[1999]: 2024-12-13T01:26:17.750191Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:26:17.751363 waagent[1999]: 2024-12-13T01:26:17.750778Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:26:17.751731 waagent[1999]: 2024-12-13T01:26:17.751580Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:17.751731 waagent[1999]: 2024-12-13T01:26:17.751664Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:17.751901 waagent[1999]: 2024-12-13T01:26:17.751852Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:26:17.752004 waagent[1999]: 2024-12-13T01:26:17.751951Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:26:17.752161 waagent[1999]: 2024-12-13T01:26:17.752110Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:26:17.752161 waagent[1999]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:26:17.752161 waagent[1999]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:26:17.752161 waagent[1999]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:26:17.752161 waagent[1999]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:17.752161 waagent[1999]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:17.752161 waagent[1999]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:17.752774 waagent[1999]: 2024-12-13T01:26:17.752722Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:17.752875 waagent[1999]: 2024-12-13T01:26:17.752841Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:26:17.753302 waagent[1999]: 2024-12-13T01:26:17.753239Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:26:17.753472 waagent[1999]: 2024-12-13T01:26:17.753428Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:26:17.753626 waagent[1999]: 2024-12-13T01:26:17.753587Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:17.754062 waagent[1999]: 2024-12-13T01:26:17.753991Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:26:17.754216 waagent[1999]: 2024-12-13T01:26:17.754173Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:26:17.754388 waagent[1999]: 2024-12-13T01:26:17.754339Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:26:17.754438 waagent[1999]: 2024-12-13T01:26:17.754410Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:26:17.755103 waagent[1999]: 2024-12-13T01:26:17.755020Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:26:17.756119 waagent[1999]: 2024-12-13T01:26:17.756026Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:26:17.761283 waagent[1999]: 2024-12-13T01:26:17.761238Z INFO ExtHandler ExtHandler Dec 13 01:26:17.761812 waagent[1999]: 2024-12-13T01:26:17.761771Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 815f9b02-acdf-46ba-9727-78b607abc4a5 correlation 26abaf94-f3d6-439f-bea2-fabcdde8cfa6 created: 2024-12-13T01:25:10.388863Z] Dec 13 01:26:17.763078 waagent[1999]: 2024-12-13T01:26:17.763017Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:26:17.763726 waagent[1999]: 2024-12-13T01:26:17.763689Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Dec 13 01:26:17.799363 waagent[1999]: 2024-12-13T01:26:17.799297Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:26:17.799363 waagent[1999]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:26:17.799363 waagent[1999]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:26:17.799363 waagent[1999]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:56:67 brd ff:ff:ff:ff:ff:ff Dec 13 01:26:17.799363 waagent[1999]: 3: enP4997s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:56:67 brd ff:ff:ff:ff:ff:ff\ altname enP4997p0s2 Dec 13 01:26:17.799363 waagent[1999]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:26:17.799363 waagent[1999]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:26:17.799363 waagent[1999]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:26:17.799363 waagent[1999]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:26:17.799363 waagent[1999]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:26:17.799363 waagent[1999]: 2: eth0 inet6 fe80::20d:3aff:fef6:5667/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:17.799363 waagent[1999]: 3: enP4997s1 inet6 fe80::20d:3aff:fef6:5667/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:17.801037 waagent[1999]: 2024-12-13T01:26:17.800991Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9F6AF55A-75DE-451F-86AE-39A625D8DCED;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:26:17.855079 waagent[1999]: 2024-12-13T01:26:17.854874Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Dec 13 01:26:17.855079 waagent[1999]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.855079 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.855079 waagent[1999]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.855079 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.855079 waagent[1999]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.855079 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.855079 waagent[1999]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:17.855079 waagent[1999]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:17.855079 waagent[1999]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:17.857768 waagent[1999]: 2024-12-13T01:26:17.857708Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:26:17.857768 waagent[1999]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.857768 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.857768 waagent[1999]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.857768 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.857768 waagent[1999]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:17.857768 waagent[1999]: pkts bytes target prot opt in out source destination Dec 13 01:26:17.857768 waagent[1999]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:17.857768 waagent[1999]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:17.857768 waagent[1999]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:17.857991 waagent[1999]: 2024-12-13T01:26:17.857960Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:26:23.361875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:23.369212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:23.463417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:23.465802 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:23.569588 kubelet[2243]: E1213 01:26:23.569541 2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:23.572978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:23.573180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:33.612039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:33.622532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:33.716315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:33.720587 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:33.803724 kubelet[2265]: E1213 01:26:33.803678 2265 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:33.807271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:33.807433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:35.388187 chronyd[1748]: Selected source PHC0 Dec 13 01:26:43.861994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:26:43.869421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:43.965028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:43.968312 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:44.010639 kubelet[2287]: E1213 01:26:44.010572 2287 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:44.014312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:44.014614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:47.244960 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:26:47.259360 systemd[1]: Started sshd@0-10.200.20.42:22-10.200.16.10:33280.service - OpenSSH per-connection server daemon (10.200.16.10:33280). Dec 13 01:26:49.642948 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 33280 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:49.644260 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:49.648409 systemd-logind[1774]: New session 3 of user core. Dec 13 01:26:49.655416 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:26:50.049278 systemd[1]: Started sshd@1-10.200.20.42:22-10.200.16.10:42468.service - OpenSSH per-connection server daemon (10.200.16.10:42468). Dec 13 01:26:50.482885 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 42468 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:50.484217 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:50.487857 systemd-logind[1774]: New session 4 of user core. Dec 13 01:26:50.495372 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:26:50.817263 sshd[2301]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:50.821466 systemd[1]: sshd@1-10.200.20.42:22-10.200.16.10:42468.service: Deactivated successfully. Dec 13 01:26:50.823233 systemd-logind[1774]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:50.824153 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:50.825330 systemd-logind[1774]: Removed session 4. 
Dec 13 01:26:50.893279 systemd[1]: Started sshd@2-10.200.20.42:22-10.200.16.10:42482.service - OpenSSH per-connection server daemon (10.200.16.10:42482). Dec 13 01:26:51.315070 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 42482 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:51.316361 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:51.320106 systemd-logind[1774]: New session 5 of user core. Dec 13 01:26:51.330407 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:26:51.635242 sshd[2309]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:51.638362 systemd[1]: sshd@2-10.200.20.42:22-10.200.16.10:42482.service: Deactivated successfully. Dec 13 01:26:51.641087 systemd-logind[1774]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:51.641635 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:51.642689 systemd-logind[1774]: Removed session 5. Dec 13 01:26:51.711262 systemd[1]: Started sshd@3-10.200.20.42:22-10.200.16.10:42486.service - OpenSSH per-connection server daemon (10.200.16.10:42486). Dec 13 01:26:52.139518 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 42486 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:52.140829 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:52.144545 systemd-logind[1774]: New session 6 of user core. Dec 13 01:26:52.154342 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:26:52.471267 sshd[2317]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:52.474891 systemd[1]: sshd@3-10.200.20.42:22-10.200.16.10:42486.service: Deactivated successfully. Dec 13 01:26:52.477720 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:52.478550 systemd-logind[1774]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:52.479516 systemd-logind[1774]: Removed session 6. Dec 13 01:26:52.545264 systemd[1]: Started sshd@4-10.200.20.42:22-10.200.16.10:42488.service - OpenSSH per-connection server daemon (10.200.16.10:42488). Dec 13 01:26:52.971879 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 42488 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:52.973202 sshd[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:52.977371 systemd-logind[1774]: New session 7 of user core. Dec 13 01:26:52.983348 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:26:53.342651 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:53.342930 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:53.662755 sudo[2329]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:53.731985 sshd[2325]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:53.736615 systemd-logind[1774]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:26:53.737249 systemd[1]: sshd@4-10.200.20.42:22-10.200.16.10:42488.service: Deactivated successfully. Dec 13 01:26:53.738655 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:26:53.740501 systemd-logind[1774]: Removed session 7. Dec 13 01:26:53.813278 systemd[1]: Started sshd@5-10.200.20.42:22-10.200.16.10:42490.service - OpenSSH per-connection server daemon (10.200.16.10:42490). 
Dec 13 01:26:54.111893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:26:54.122221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:54.238879 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 42490 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:54.240650 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:54.244942 systemd-logind[1774]: New session 8 of user core. Dec 13 01:26:54.251339 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:26:54.326393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:54.329033 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:54.367902 kubelet[2350]: E1213 01:26:54.367725 2350 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:54.370390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:54.370568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:54.430280 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:26:54.486814 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:54.487530 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:54.490721 sudo[2360]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:54.495566 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:54.495824 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:54.513269 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:54.516035 auditctl[2363]: No rules Dec 13 01:26:54.515655 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:26:54.515871 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:54.519445 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:54.543553 augenrules[2382]: No rules Dec 13 01:26:54.545269 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:54.546799 sudo[2359]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:54.616269 sshd[2334]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:54.620353 systemd-logind[1774]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:26:54.620745 systemd[1]: sshd@5-10.200.20.42:22-10.200.16.10:42490.service: Deactivated successfully. Dec 13 01:26:54.623353 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:26:54.624443 systemd-logind[1774]: Removed session 8. Dec 13 01:26:54.695296 systemd[1]: Started sshd@6-10.200.20.42:22-10.200.16.10:42506.service - OpenSSH per-connection server daemon (10.200.16.10:42506). 
Dec 13 01:26:55.124690 sshd[2391]: Accepted publickey for core from 10.200.16.10 port 42506 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:55.125964 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:55.129983 systemd-logind[1774]: New session 9 of user core. Dec 13 01:26:55.140281 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:26:55.373649 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:26:55.373915 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:56.210270 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:26:56.211263 (dockerd)[2411]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:26:56.961244 dockerd[2411]: time="2024-12-13T01:26:56.961186724Z" level=info msg="Starting up" Dec 13 01:26:57.139434 update_engine[1779]: I20241213 01:26:57.139348 1779 update_attempter.cc:509] Updating boot flags... Dec 13 01:26:57.229077 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2432) Dec 13 01:26:57.370699 dockerd[2411]: time="2024-12-13T01:26:57.370635894Z" level=info msg="Loading containers: start." Dec 13 01:26:57.531082 kernel: Initializing XFRM netlink socket Dec 13 01:26:57.642969 systemd-networkd[1370]: docker0: Link UP Dec 13 01:26:57.668436 dockerd[2411]: time="2024-12-13T01:26:57.668391066Z" level=info msg="Loading containers: done." Dec 13 01:26:57.680492 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck376425950-merged.mount: Deactivated successfully. Dec 13 01:26:57.688688 dockerd[2411]: time="2024-12-13T01:26:57.688637193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:26:57.688878 dockerd[2411]: time="2024-12-13T01:26:57.688852672Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:26:57.689032 dockerd[2411]: time="2024-12-13T01:26:57.689009872Z" level=info msg="Daemon has completed initialization" Dec 13 01:26:57.742500 dockerd[2411]: time="2024-12-13T01:26:57.742422160Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:26:57.743102 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:26:59.103527 containerd[1829]: time="2024-12-13T01:26:59.103190757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:27:00.131944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224947770.mount: Deactivated successfully. 
Dec 13 01:27:01.581658 containerd[1829]: time="2024-12-13T01:27:01.581605770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.583754 containerd[1829]: time="2024-12-13T01:27:01.583717802Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:27:01.586571 containerd[1829]: time="2024-12-13T01:27:01.586519873Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.594307 containerd[1829]: time="2024-12-13T01:27:01.593745207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.594676 containerd[1829]: time="2024-12-13T01:27:01.594634604Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.491400567s" Dec 13 01:27:01.594724 containerd[1829]: time="2024-12-13T01:27:01.594677124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:27:01.614304 containerd[1829]: time="2024-12-13T01:27:01.614265655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:27:03.115285 containerd[1829]: time="2024-12-13T01:27:03.115209451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:03.118549 containerd[1829]: time="2024-12-13T01:27:03.118324800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:27:03.122255 containerd[1829]: time="2024-12-13T01:27:03.122228586Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:03.128842 containerd[1829]: time="2024-12-13T01:27:03.128790523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:03.130704 containerd[1829]: time="2024-12-13T01:27:03.130535037Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.516035783s" Dec 13 01:27:03.130704 containerd[1829]: time="2024-12-13T01:27:03.130576237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 
01:27:03.151307 containerd[1829]: time="2024-12-13T01:27:03.151202804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:27:04.197092 containerd[1829]: time="2024-12-13T01:27:04.196079246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:04.198723 containerd[1829]: time="2024-12-13T01:27:04.198671277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:27:04.204993 containerd[1829]: time="2024-12-13T01:27:04.204927295Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:04.214079 containerd[1829]: time="2024-12-13T01:27:04.213020546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:04.214281 containerd[1829]: time="2024-12-13T01:27:04.214239822Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.062993338s" Dec 13 01:27:04.214368 containerd[1829]: time="2024-12-13T01:27:04.214351901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:27:04.235127 containerd[1829]: time="2024-12-13T01:27:04.235091348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:27:04.611886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:27:04.617263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:04.716270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:04.727415 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:04.768684 kubelet[2677]: E1213 01:27:04.768634 2677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:04.772261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:04.772572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:05.703397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576876136.mount: Deactivated successfully. 
Dec 13 01:27:06.304090 containerd[1829]: time="2024-12-13T01:27:06.304024185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.307334 containerd[1829]: time="2024-12-13T01:27:06.307140934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:27:06.309885 containerd[1829]: time="2024-12-13T01:27:06.309836844Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.313619 containerd[1829]: time="2024-12-13T01:27:06.313549711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:06.314226 containerd[1829]: time="2024-12-13T01:27:06.314097389Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.078969801s" Dec 13 01:27:06.314226 containerd[1829]: time="2024-12-13T01:27:06.314135029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:27:06.335655 containerd[1829]: time="2024-12-13T01:27:06.335561474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:27:07.086547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273108521.mount: Deactivated successfully. 
Dec 13 01:27:09.749974 containerd[1829]: time="2024-12-13T01:27:09.749907005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.753443 containerd[1829]: time="2024-12-13T01:27:09.753181596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:27:09.756538 containerd[1829]: time="2024-12-13T01:27:09.756458667Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.762616 containerd[1829]: time="2024-12-13T01:27:09.762530651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:09.763822 containerd[1829]: time="2024-12-13T01:27:09.763784128Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.428184894s" Dec 13 01:27:09.763870 containerd[1829]: time="2024-12-13T01:27:09.763825047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:27:09.784722 containerd[1829]: time="2024-12-13T01:27:09.784678392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:27:10.371617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379960398.mount: Deactivated successfully. 
Dec 13 01:27:10.399077 containerd[1829]: time="2024-12-13T01:27:10.398847027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.402042 containerd[1829]: time="2024-12-13T01:27:10.402006379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:27:10.408062 containerd[1829]: time="2024-12-13T01:27:10.408026683Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.412812 containerd[1829]: time="2024-12-13T01:27:10.412763310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:10.413570 containerd[1829]: time="2024-12-13T01:27:10.413439868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 628.565997ms" Dec 13 01:27:10.413570 containerd[1829]: time="2024-12-13T01:27:10.413476988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:27:10.432555 containerd[1829]: time="2024-12-13T01:27:10.432509417Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:27:11.142785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786701713.mount: Deactivated successfully. Dec 13 01:27:12.874437 containerd[1829]: time="2024-12-13T01:27:12.874385343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:12.879149 containerd[1829]: time="2024-12-13T01:27:12.878894207Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:27:12.941165 containerd[1829]: time="2024-12-13T01:27:12.941131149Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:12.978395 containerd[1829]: time="2024-12-13T01:27:12.978354498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:12.982070 containerd[1829]: time="2024-12-13T01:27:12.980976049Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.548428432s" Dec 13 01:27:12.982070 containerd[1829]: time="2024-12-13T01:27:12.981016249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:27:14.862603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
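
The containerd entries above report each image pull with its byte count and wall-clock duration (kube-apiserver, kube-proxy, coredns, and etcd). As a quick sanity check of those figures, the hedged sketch below recomputes the effective pull rate; the sizes and durations are copied from the log lines above, and the script itself is illustrative only, not part of the captured system.

```python
#!/usr/bin/env python3
"""Illustrative only: back-of-the-envelope pull throughput computed from
the containerd "Pulled image ... size X in Ys" entries quoted above."""

# (size in bytes, duration in seconds), copied from the log lines above
pulls = {
    "registry.k8s.io/kube-apiserver:v1.29.12": (32_198_050, 2.491400567),
    "registry.k8s.io/kube-proxy:v1.29.12":     (25_272_996, 2.078969801),
    "registry.k8s.io/coredns/coredns:v1.11.1": (16_482_581, 3.428184894),
    "registry.k8s.io/etcd:3.5.10-0":           (65_198_393, 2.548428432),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mb_s = size_bytes / seconds / 1_000_000  # decimal MB/s
    print(f"{image}: {size_bytes} bytes in {seconds:.3f}s -> ~{rate_mb_s:.1f} MB/s")
```
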
Dec 13 01:27:14.871318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:14.973191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:14.982377 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:15.026897 kubelet[2871]: E1213 01:27:15.026850 2871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:15.031713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:15.031911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:17.667503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:17.676360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:17.706438 systemd[1]: Reloading requested from client PID 2891 ('systemctl') (unit session-9.scope)... Dec 13 01:27:17.706589 systemd[1]: Reloading... Dec 13 01:27:17.786096 zram_generator::config[2929]: No configuration found. Dec 13 01:27:17.916411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:17.988550 systemd[1]: Reloading finished in 281 ms. Dec 13 01:27:18.026224 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:27:18.026472 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:27:18.026868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:18.032282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:18.169225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:18.176387 (kubelet)[3010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:18.214091 kubelet[3010]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:18.214091 kubelet[3010]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:18.214091 kubelet[3010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:18.214457 kubelet[3010]: I1213 01:27:18.214206 3010 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:19.321031 kubelet[3010]: I1213 01:27:19.320996 3010 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:19.322734 kubelet[3010]: I1213 01:27:19.321452 3010 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:19.322734 kubelet[3010]: I1213 01:27:19.321717 3010 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:19.336966 kubelet[3010]: E1213 01:27:19.336918 3010 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.337476 kubelet[3010]: I1213 01:27:19.337451 3010 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:19.349606 kubelet[3010]: I1213 01:27:19.349569 3010 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:19.351199 kubelet[3010]: I1213 01:27:19.351167 3010 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:19.351441 kubelet[3010]: I1213 01:27:19.351415 3010 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:19.351535 kubelet[3010]: I1213 01:27:19.351447 3010 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:19.351535 kubelet[3010]: I1213 01:27:19.351459 3010 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:19.351619 kubelet[3010]: I1213 01:27:19.351594 3010 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:19.353851 kubelet[3010]: I1213 01:27:19.353823 3010 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:19.353906 
kubelet[3010]: I1213 01:27:19.353860 3010 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:19.353906 kubelet[3010]: I1213 01:27:19.353887 3010 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:19.353906 kubelet[3010]: I1213 01:27:19.353905 3010 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:19.356602 kubelet[3010]: W1213 01:27:19.356224 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.356602 kubelet[3010]: E1213 01:27:19.356276 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.356602 kubelet[3010]: W1213 01:27:19.356553 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-16a3da9678&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.356602 kubelet[3010]: E1213 01:27:19.356580 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-16a3da9678&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.357594 kubelet[3010]: I1213 01:27:19.357173 3010 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:19.357594 kubelet[3010]: I1213 01:27:19.357478 3010 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:19.358011 kubelet[3010]: W1213 01:27:19.357992 3010 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:27:19.358937 kubelet[3010]: I1213 01:27:19.358915 3010 server.go:1256] "Started kubelet" Dec 13 01:27:19.359336 kubelet[3010]: I1213 01:27:19.359291 3010 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:19.360089 kubelet[3010]: I1213 01:27:19.360068 3010 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:19.361367 kubelet[3010]: I1213 01:27:19.361344 3010 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:19.361976 kubelet[3010]: I1213 01:27:19.361715 3010 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:19.363335 kubelet[3010]: I1213 01:27:19.363249 3010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:19.364229 kubelet[3010]: E1213 01:27:19.364012 3010 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-16a3da9678.1810983bd99a1784 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-16a3da9678,UID:ci-4081.2.1-a-16a3da9678,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-16a3da9678,},FirstTimestamp:2024-12-13 01:27:19.358887812 +0000 UTC m=+1.179327750,LastTimestamp:2024-12-13 01:27:19.358887812 +0000 UTC m=+1.179327750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-16a3da9678,}" Dec 13 01:27:19.366644 kubelet[3010]: E1213 01:27:19.366614 3010 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-16a3da9678\" not found" Dec 13 01:27:19.366855 kubelet[3010]: I1213 01:27:19.366743 3010 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:19.367010 kubelet[3010]: I1213 01:27:19.366999 3010 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:19.367165 kubelet[3010]: I1213 01:27:19.367154 3010 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:19.367959 kubelet[3010]: W1213 01:27:19.367603 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.367959 kubelet[3010]: E1213 01:27:19.367645 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.368379 kubelet[3010]: E1213 01:27:19.368364 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-16a3da9678?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="200ms" Dec 13 01:27:19.368636 kubelet[3010]: E1213 01:27:19.368623 3010 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:19.369327 kubelet[3010]: I1213 01:27:19.369296 3010 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:19.369535 kubelet[3010]: I1213 01:27:19.369518 3010 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:19.371177 kubelet[3010]: I1213 01:27:19.371161 3010 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:19.382833 kubelet[3010]: I1213 01:27:19.382796 3010 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:19.384231 kubelet[3010]: I1213 01:27:19.384199 3010 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:19.384231 kubelet[3010]: I1213 01:27:19.384226 3010 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:19.384335 kubelet[3010]: I1213 01:27:19.384249 3010 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:19.384335 kubelet[3010]: E1213 01:27:19.384296 3010 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:19.401073 kubelet[3010]: W1213 01:27:19.400305 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.401073 kubelet[3010]: E1213 01:27:19.400361 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:19.447533 kubelet[3010]: I1213 01:27:19.447507 3010 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:19.447750 kubelet[3010]: I1213 01:27:19.447738 3010 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:19.447839 kubelet[3010]: I1213 01:27:19.447830 3010 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:19.458607 kubelet[3010]: I1213 01:27:19.458573 3010 policy_none.go:49] "None policy: Start" Dec 13 01:27:19.459636 kubelet[3010]: I1213 01:27:19.459579 3010 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:19.460163 kubelet[3010]: I1213 01:27:19.459778 3010 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:19.467088 kubelet[3010]: I1213 01:27:19.466787 3010 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:19.468320 kubelet[3010]: I1213 01:27:19.468294 3010 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:19.472014 kubelet[3010]: I1213 01:27:19.471982 3010 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.472475 kubelet[3010]: E1213 01:27:19.472409 3010 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.473192 
kubelet[3010]: E1213 01:27:19.473169 3010 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-16a3da9678\" not found" Dec 13 01:27:19.484613 kubelet[3010]: I1213 01:27:19.484587 3010 topology_manager.go:215] "Topology Admit Handler" podUID="1aafad819be4cc093455b6249d943ee4" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.486537 kubelet[3010]: I1213 01:27:19.486328 3010 topology_manager.go:215] "Topology Admit Handler" podUID="d61eb4817486371540112de22fd906b8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.489466 kubelet[3010]: I1213 01:27:19.489178 3010 topology_manager.go:215] "Topology Admit Handler" podUID="a22294777cd0d586765804431a99557a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.567873 kubelet[3010]: I1213 01:27:19.567837 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568010 kubelet[3010]: I1213 01:27:19.567898 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568010 kubelet[3010]: I1213 01:27:19.567924 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568010 kubelet[3010]: I1213 01:27:19.567944 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568010 kubelet[3010]: I1213 01:27:19.567975 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568010 kubelet[3010]: I1213 01:27:19.567996 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568178 kubelet[3010]: I1213 
01:27:19.568016 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568178 kubelet[3010]: I1213 01:27:19.568036 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1aafad819be4cc093455b6249d943ee4-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-16a3da9678\" (UID: \"1aafad819be4cc093455b6249d943ee4\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.568178 kubelet[3010]: I1213 01:27:19.568090 3010 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.569114 kubelet[3010]: E1213 01:27:19.569092 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-16a3da9678?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="400ms" Dec 13 01:27:19.674418 kubelet[3010]: I1213 01:27:19.674347 3010 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.674782 kubelet[3010]: E1213 01:27:19.674761 3010 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:19.792072 containerd[1829]: time="2024-12-13T01:27:19.791934164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-16a3da9678,Uid:1aafad819be4cc093455b6249d943ee4,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:19.795542 containerd[1829]: time="2024-12-13T01:27:19.795483552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-16a3da9678,Uid:d61eb4817486371540112de22fd906b8,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:19.798360 containerd[1829]: time="2024-12-13T01:27:19.798320662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-16a3da9678,Uid:a22294777cd0d586765804431a99557a,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:19.969974 kubelet[3010]: E1213 01:27:19.969872 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-16a3da9678?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="800ms" Dec 13 01:27:20.077085 kubelet[3010]: I1213 01:27:20.076845 3010 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:20.077209 kubelet[3010]: E1213 01:27:20.077194 3010 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:20.309950 kubelet[3010]: 
W1213 01:27:20.309825 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.309950 kubelet[3010]: E1213 01:27:20.309877 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.501166 kubelet[3010]: W1213 01:27:20.500987 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-16a3da9678&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.501166 kubelet[3010]: E1213 01:27:20.501147 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-16a3da9678&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.532756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743705780.mount: Deactivated successfully. Dec 13 01:27:20.558496 containerd[1829]: time="2024-12-13T01:27:20.558450051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:20.572302 containerd[1829]: time="2024-12-13T01:27:20.572159764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:27:20.576668 containerd[1829]: time="2024-12-13T01:27:20.576624709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:20.583073 containerd[1829]: time="2024-12-13T01:27:20.582481848Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:20.586330 containerd[1829]: time="2024-12-13T01:27:20.585576678Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:20.590521 containerd[1829]: time="2024-12-13T01:27:20.590175382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:20.595200 containerd[1829]: time="2024-12-13T01:27:20.595160405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:20.601747 containerd[1829]: time="2024-12-13T01:27:20.601222224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:20.602195 containerd[1829]: time="2024-12-13T01:27:20.602165341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 810.154697ms" Dec 13 01:27:20.604943 containerd[1829]: time="2024-12-13T01:27:20.604681892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 806.29059ms" Dec 13 01:27:20.623672 containerd[1829]: time="2024-12-13T01:27:20.623632187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 828.068075ms" Dec 13 01:27:20.690384 kubelet[3010]: W1213 01:27:20.690342 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.690384 kubelet[3010]: E1213 01:27:20.690387 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.722941 kubelet[3010]: W1213 01:27:20.722883 3010 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.722941 kubelet[3010]: E1213 01:27:20.722942 3010 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:20.770671 kubelet[3010]: E1213 01:27:20.770632 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-16a3da9678?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:20.879602 kubelet[3010]: I1213 01:27:20.879566 3010 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:20.879935 kubelet[3010]: E1213 01:27:20.879916 3010 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:21.189441 containerd[1829]: time="2024-12-13T01:27:21.189264724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:21.189441 containerd[1829]: time="2024-12-13T01:27:21.189315924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:21.189441 containerd[1829]: time="2024-12-13T01:27:21.189350564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.190268 containerd[1829]: time="2024-12-13T01:27:21.189455443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.196137 containerd[1829]: time="2024-12-13T01:27:21.195956541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:21.196300 containerd[1829]: time="2024-12-13T01:27:21.196121101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:21.196388 containerd[1829]: time="2024-12-13T01:27:21.196269620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.197014 containerd[1829]: time="2024-12-13T01:27:21.196978218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.197401 containerd[1829]: time="2024-12-13T01:27:21.197349296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:21.197582 containerd[1829]: time="2024-12-13T01:27:21.197522656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:21.197737 containerd[1829]: time="2024-12-13T01:27:21.197563696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.197849 containerd[1829]: time="2024-12-13T01:27:21.197812055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:21.258934 containerd[1829]: time="2024-12-13T01:27:21.258864565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-16a3da9678,Uid:a22294777cd0d586765804431a99557a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ee328e0c72c11269816533956badd0f58986cd03488b8c0e61babdc40e10a78\"" Dec 13 01:27:21.266380 containerd[1829]: time="2024-12-13T01:27:21.266336259Z" level=info msg="CreateContainer within sandbox \"9ee328e0c72c11269816533956badd0f58986cd03488b8c0e61babdc40e10a78\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:21.268215 containerd[1829]: time="2024-12-13T01:27:21.268177133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-16a3da9678,Uid:d61eb4817486371540112de22fd906b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a7cee8b110f2451994d9576f15eeb2d4df9a7662e06384d992dd067432862a\"" Dec 13 01:27:21.274094 containerd[1829]: time="2024-12-13T01:27:21.274040433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-16a3da9678,Uid:1aafad819be4cc093455b6249d943ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c863ebf6176966e1bcd5a3d4507577c941fbebee563df8ed1036b9a3683f67d5\"" Dec 13 01:27:21.274892 containerd[1829]: time="2024-12-13T01:27:21.274830670Z" level=info msg="CreateContainer within sandbox \"18a7cee8b110f2451994d9576f15eeb2d4df9a7662e06384d992dd067432862a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:21.278783 containerd[1829]: time="2024-12-13T01:27:21.278740977Z" level=info msg="CreateContainer within sandbox \"c863ebf6176966e1bcd5a3d4507577c941fbebee563df8ed1036b9a3683f67d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:21.351976 containerd[1829]: time="2024-12-13T01:27:21.351855846Z" level=info msg="CreateContainer within sandbox \"9ee328e0c72c11269816533956badd0f58986cd03488b8c0e61babdc40e10a78\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa83fa7de3562e624f650d02c16fb40ed2ac421091bdaa82da86fc18c685acea\"" Dec 13 01:27:21.352510 containerd[1829]: time="2024-12-13T01:27:21.352483043Z" level=info msg="StartContainer for \"fa83fa7de3562e624f650d02c16fb40ed2ac421091bdaa82da86fc18c685acea\"" Dec 13 01:27:21.355508 containerd[1829]: time="2024-12-13T01:27:21.355385913Z" level=info msg="CreateContainer within sandbox \"c863ebf6176966e1bcd5a3d4507577c941fbebee563df8ed1036b9a3683f67d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b103e7ef2cff769f756b6db7d5d3616346908dd766a1f23e5ab6b6def8baee7\"" Dec 13 01:27:21.355954 containerd[1829]: time="2024-12-13T01:27:21.355923552Z" level=info msg="StartContainer for \"1b103e7ef2cff769f756b6db7d5d3616346908dd766a1f23e5ab6b6def8baee7\"" Dec 13 01:27:21.359732 containerd[1829]: time="2024-12-13T01:27:21.359694899Z" level=info msg="CreateContainer within sandbox \"18a7cee8b110f2451994d9576f15eeb2d4df9a7662e06384d992dd067432862a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fddc5c48b6dba3b3e93a5d8597ec0809cdda31c242687b2a487f2a8f84578a7e\"" Dec 13 01:27:21.360498 containerd[1829]: time="2024-12-13T01:27:21.360447416Z" level=info msg="StartContainer for \"fddc5c48b6dba3b3e93a5d8597ec0809cdda31c242687b2a487f2a8f84578a7e\"" Dec 13 01:27:21.470839 kubelet[3010]: E1213 01:27:21.470712 3010 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.42:6443: connect: connection refused Dec 13 01:27:21.474117 containerd[1829]: time="2024-12-13T01:27:21.474069266Z" level=info msg="StartContainer for \"fddc5c48b6dba3b3e93a5d8597ec0809cdda31c242687b2a487f2a8f84578a7e\" returns successfully" Dec 13 01:27:21.475326 containerd[1829]: time="2024-12-13T01:27:21.474200945Z" level=info msg="StartContainer for \"fa83fa7de3562e624f650d02c16fb40ed2ac421091bdaa82da86fc18c685acea\" returns successfully" Dec 13 01:27:21.479547 containerd[1829]: time="2024-12-13T01:27:21.479501967Z" level=info msg="StartContainer for \"1b103e7ef2cff769f756b6db7d5d3616346908dd766a1f23e5ab6b6def8baee7\" returns successfully" Dec 13 01:27:22.484273 kubelet[3010]: I1213 01:27:22.484243 3010 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:23.240676 kubelet[3010]: E1213 01:27:23.240641 3010 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-16a3da9678\" not found" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:23.275852 kubelet[3010]: I1213 01:27:23.275725 3010 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:23.358517 kubelet[3010]: I1213 01:27:23.358473 3010 apiserver.go:52] "Watching apiserver" Dec 13 01:27:23.367340 kubelet[3010]: I1213 01:27:23.367308 3010 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:23.486811 kubelet[3010]: E1213 01:27:23.486341 3010 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:23.486811 kubelet[3010]: E1213 01:27:23.486347 3010 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:23.489057 kubelet[3010]: E1213 01:27:23.488197 3010 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-a-16a3da9678\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:24.472291 kubelet[3010]: W1213 01:27:24.472268 3010 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:25.954546 kubelet[3010]: W1213 01:27:25.954509 3010 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:26.333920 systemd[1]: Reloading requested from client PID 3280 ('systemctl') (unit session-9.scope)... Dec 13 01:27:26.333934 systemd[1]: Reloading... Dec 13 01:27:26.422119 zram_generator::config[3320]: No configuration found. 
Dec 13 01:27:26.537242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:26.617075 systemd[1]: Reloading finished in 282 ms. Dec 13 01:27:26.646186 kubelet[3010]: I1213 01:27:26.646130 3010 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:26.646437 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:26.662192 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:26.662523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:26.670312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:26.878327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:26.879538 (kubelet)[3394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:26.944995 kubelet[3394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:26.944995 kubelet[3394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:26.944995 kubelet[3394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:26.944995 kubelet[3394]: I1213 01:27:26.944955 3394 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:26.955904 kubelet[3394]: I1213 01:27:26.955660 3394 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:26.955904 kubelet[3394]: I1213 01:27:26.955700 3394 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:26.955904 kubelet[3394]: I1213 01:27:26.955897 3394 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:26.958735 kubelet[3394]: I1213 01:27:26.958693 3394 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:26.966111 kubelet[3394]: I1213 01:27:26.963534 3394 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:26.970409 kubelet[3394]: I1213 01:27:26.970343 3394 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.970760 3394 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.970922 3394 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.970944 3394 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.970952 3394 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.970986 3394 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:26.971183 kubelet[3394]: I1213 01:27:26.971105 3394 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:26.971449 kubelet[3394]: I1213 01:27:26.971123 3394 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:26.971449 kubelet[3394]: I1213 01:27:26.971147 3394 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:26.971449 kubelet[3394]: I1213 01:27:26.971162 3394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:26.977436 kubelet[3394]: I1213 01:27:26.975251 3394 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:26.981102 kubelet[3394]: I1213 01:27:26.981066 3394 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:26.981955 kubelet[3394]: I1213 01:27:26.981935 3394 server.go:1256] "Started kubelet" Dec 13 01:27:26.994076 kubelet[3394]: I1213 01:27:26.990011 3394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:26.998617 kubelet[3394]: I1213 01:27:26.998587 3394 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:27.012654 kubelet[3394]: I1213 01:27:27.012258 3394 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:27.015619 kubelet[3394]: I1213 01:27:27.015593 3394 server.go:461] "Adding debug handlers to kubelet server" Dec 13 
01:27:27.015970 kubelet[3394]: I1213 01:27:27.015949 3394 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:27.016139 kubelet[3394]: I1213 01:27:27.016122 3394 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:27.018075 kubelet[3394]: I1213 01:27:27.018009 3394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:27.018414 kubelet[3394]: I1213 01:27:27.018397 3394 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:27.022227 kubelet[3394]: I1213 01:27:27.022196 3394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:27.023965 kubelet[3394]: I1213 01:27:27.023923 3394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:27.023965 kubelet[3394]: I1213 01:27:27.023955 3394 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:27.023965 kubelet[3394]: I1213 01:27:27.023973 3394 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:27.024596 kubelet[3394]: E1213 01:27:27.024025 3394 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:27.031891 kubelet[3394]: I1213 01:27:27.031849 3394 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:27.032229 kubelet[3394]: I1213 01:27:27.031967 3394 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:27.036372 kubelet[3394]: I1213 01:27:27.035699 3394 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:27.095521 kubelet[3394]: I1213 01:27:27.095497 3394 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:27.095723 kubelet[3394]: I1213 01:27:27.095711 3394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:27.095845 kubelet[3394]: I1213 01:27:27.095836 3394 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:27.096072 kubelet[3394]: I1213 01:27:27.096038 3394 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:27.096161 kubelet[3394]: I1213 01:27:27.096150 3394 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:27.096211 kubelet[3394]: I1213 01:27:27.096203 3394 policy_none.go:49] "None policy: Start" Dec 13 01:27:27.097315 kubelet[3394]: I1213 01:27:27.097288 3394 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:27.097315 kubelet[3394]: I1213 01:27:27.097322 3394 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:27.097610 kubelet[3394]: I1213 01:27:27.097581 3394 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:27.098882 kubelet[3394]: I1213 01:27:27.098854 3394 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:27.099331 kubelet[3394]: I1213 01:27:27.099112 3394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:27.117452 kubelet[3394]: I1213 01:27:27.117422 3394 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.124155 kubelet[3394]: I1213 01:27:27.124121 3394 topology_manager.go:215] "Topology Admit 
Handler" podUID="d61eb4817486371540112de22fd906b8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.124755 kubelet[3394]: I1213 01:27:27.124347 3394 topology_manager.go:215] "Topology Admit Handler" podUID="a22294777cd0d586765804431a99557a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.125082 kubelet[3394]: I1213 01:27:27.125030 3394 topology_manager.go:215] "Topology Admit Handler" podUID="1aafad819be4cc093455b6249d943ee4" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.138813 kubelet[3394]: I1213 01:27:27.138420 3394 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.139166 kubelet[3394]: I1213 01:27:27.139138 3394 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.139471 kubelet[3394]: W1213 01:27:27.139364 3394 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:27.139564 kubelet[3394]: W1213 01:27:27.139414 3394 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:27.139697 kubelet[3394]: E1213 01:27:27.139686 3394 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.140150 kubelet[3394]: E1213 01:27:27.140024 3394 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-a-16a3da9678\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.140150 kubelet[3394]: W1213 01:27:27.140001 3394 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:27.193003 sudo[3423]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:27:27.193350 sudo[3423]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:27:27.217840 kubelet[3394]: I1213 01:27:27.217752 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218452 kubelet[3394]: I1213 01:27:27.218343 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218452 kubelet[3394]: I1213 01:27:27.218380 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d61eb4817486371540112de22fd906b8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" (UID: \"d61eb4817486371540112de22fd906b8\") " 
pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218452 kubelet[3394]: I1213 01:27:27.218402 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218771 kubelet[3394]: I1213 01:27:27.218598 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218771 kubelet[3394]: I1213 01:27:27.218626 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1aafad819be4cc093455b6249d943ee4-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-16a3da9678\" (UID: \"1aafad819be4cc093455b6249d943ee4\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218771 kubelet[3394]: I1213 01:27:27.218707 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218771 kubelet[3394]: I1213 01:27:27.218731 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.218771 kubelet[3394]: I1213 01:27:27.218752 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a22294777cd0d586765804431a99557a-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-16a3da9678\" (UID: \"a22294777cd0d586765804431a99557a\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:27.646381 sudo[3423]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:27.973816 kubelet[3394]: I1213 01:27:27.973695 3394 apiserver.go:52] "Watching apiserver" Dec 13 01:27:28.016670 kubelet[3394]: I1213 01:27:28.016633 3394 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:28.079830 kubelet[3394]: W1213 01:27:28.079797 3394 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:28.080798 kubelet[3394]: E1213 01:27:28.080773 3394 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-16a3da9678\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" Dec 13 01:27:28.123955 kubelet[3394]: I1213 01:27:28.123631 3394 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-16a3da9678" podStartSLOduration=3.123570621 podStartE2EDuration="3.123570621s" podCreationTimestamp="2024-12-13 01:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.122767544 +0000 UTC m=+1.238484202" watchObservedRunningTime="2024-12-13 01:27:28.123570621 +0000 UTC m=+1.239287279" Dec 13 01:27:28.123955 kubelet[3394]: I1213 01:27:28.123728 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-16a3da9678" podStartSLOduration=4.12371062 podStartE2EDuration="4.12371062s" podCreationTimestamp="2024-12-13 01:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.109232709 +0000 UTC m=+1.224949367" watchObservedRunningTime="2024-12-13 01:27:28.12371062 +0000 UTC m=+1.239427278" Dec 13 01:27:29.479447 sudo[2395]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:29.563448 sshd[2391]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:29.567939 systemd[1]: sshd@6-10.200.20.42:22-10.200.16.10:42506.service: Deactivated successfully. Dec 13 01:27:29.569899 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:27:29.571742 systemd-logind[1774]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:27:29.572701 systemd-logind[1774]: Removed session 9. Dec 13 01:27:33.928225 kubelet[3394]: I1213 01:27:33.927838 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-16a3da9678" podStartSLOduration=6.927798943 podStartE2EDuration="6.927798943s" podCreationTimestamp="2024-12-13 01:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:28.139146609 +0000 UTC m=+1.254863267" watchObservedRunningTime="2024-12-13 01:27:33.927798943 +0000 UTC m=+7.043515601" Dec 13 01:27:40.102103 kubelet[3394]: I1213 01:27:40.101962 3394 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:27:40.102579 containerd[1829]: time="2024-12-13T01:27:40.102425678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
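The pod_startup_latency_tracker entries above derive podStartSLOduration from the logged podCreationTimestamp and the observed running time. A minimal Python sketch (illustrative arithmetic only, not part of this log) reproducing the ~3.12 s figure for the kube-scheduler pod:

from datetime import datetime, timezone

# Values copied from the kube-scheduler entry above; nanoseconds are truncated to
# microseconds because datetime carries at most microsecond precision.
created = datetime(2024, 12, 13, 1, 27, 25, tzinfo=timezone.utc)           # podCreationTimestamp
running = datetime(2024, 12, 13, 1, 27, 28, 123570, tzinfo=timezone.utc)   # watchObservedRunningTime (truncated)

print((running - created).total_seconds())  # -> 3.12357, matching podStartSLOduration=3.123570621s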
Dec 13 01:27:40.102942 kubelet[3394]: I1213 01:27:40.102599 3394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:27:40.958073 kubelet[3394]: I1213 01:27:40.954073 3394 topology_manager.go:215] "Topology Admit Handler" podUID="4fbfec43-465c-4467-830c-279747552904" podNamespace="kube-system" podName="kube-proxy-mhxdj" Dec 13 01:27:40.962421 kubelet[3394]: I1213 01:27:40.962388 3394 topology_manager.go:215] "Topology Admit Handler" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" podNamespace="kube-system" podName="cilium-zbmtk" Dec 13 01:27:40.998509 kubelet[3394]: I1213 01:27:40.998467 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-etc-cni-netd\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998509 kubelet[3394]: I1213 01:27:40.998514 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsm4p\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-kube-api-access-jsm4p\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998545 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cni-path\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998566 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-net\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998589 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fbfec43-465c-4467-830c-279747552904-lib-modules\") pod \"kube-proxy-mhxdj\" (UID: \"4fbfec43-465c-4467-830c-279747552904\") " pod="kube-system/kube-proxy-mhxdj" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998609 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-hostproc\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998628 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-bpf-maps\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998668 kubelet[3394]: I1213 01:27:40.998647 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-kernel\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " 
pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998668 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wthd\" (UniqueName: \"kubernetes.io/projected/4fbfec43-465c-4467-830c-279747552904-kube-api-access-9wthd\") pod \"kube-proxy-mhxdj\" (UID: \"4fbfec43-465c-4467-830c-279747552904\") " pod="kube-system/kube-proxy-mhxdj" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998687 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-lib-modules\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998706 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c16c7b09-3da7-4f50-8901-b2eaa675c671-clustermesh-secrets\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998724 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-run\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998752 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-config-path\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998802 kubelet[3394]: I1213 01:27:40.998786 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-hubble-tls\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998926 kubelet[3394]: I1213 01:27:40.998805 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4fbfec43-465c-4467-830c-279747552904-kube-proxy\") pod \"kube-proxy-mhxdj\" (UID: \"4fbfec43-465c-4467-830c-279747552904\") " pod="kube-system/kube-proxy-mhxdj" Dec 13 01:27:40.998926 kubelet[3394]: I1213 01:27:40.998831 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fbfec43-465c-4467-830c-279747552904-xtables-lock\") pod \"kube-proxy-mhxdj\" (UID: \"4fbfec43-465c-4467-830c-279747552904\") " pod="kube-system/kube-proxy-mhxdj" Dec 13 01:27:40.998926 kubelet[3394]: I1213 01:27:40.998849 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-cgroup\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:40.998926 kubelet[3394]: I1213 01:27:40.998868 3394 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-xtables-lock\") pod \"cilium-zbmtk\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " pod="kube-system/cilium-zbmtk" Dec 13 01:27:41.049932 kubelet[3394]: I1213 01:27:41.049889 3394 topology_manager.go:215] "Topology Admit Handler" podUID="065264e6-568a-4535-9c70-1df4badd6557" podNamespace="kube-system" podName="cilium-operator-5cc964979-wgrlp" Dec 13 01:27:41.100086 kubelet[3394]: I1213 01:27:41.099745 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/065264e6-568a-4535-9c70-1df4badd6557-cilium-config-path\") pod \"cilium-operator-5cc964979-wgrlp\" (UID: \"065264e6-568a-4535-9c70-1df4badd6557\") " pod="kube-system/cilium-operator-5cc964979-wgrlp" Dec 13 01:27:41.100086 kubelet[3394]: I1213 01:27:41.099814 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v86m\" (UniqueName: \"kubernetes.io/projected/065264e6-568a-4535-9c70-1df4badd6557-kube-api-access-4v86m\") pod \"cilium-operator-5cc964979-wgrlp\" (UID: \"065264e6-568a-4535-9c70-1df4badd6557\") " pod="kube-system/cilium-operator-5cc964979-wgrlp" Dec 13 01:27:41.260978 containerd[1829]: time="2024-12-13T01:27:41.260832372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhxdj,Uid:4fbfec43-465c-4467-830c-279747552904,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:41.266771 containerd[1829]: time="2024-12-13T01:27:41.266514389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbmtk,Uid:c16c7b09-3da7-4f50-8901-b2eaa675c671,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:41.321632 containerd[1829]: time="2024-12-13T01:27:41.321451285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:41.321632 containerd[1829]: time="2024-12-13T01:27:41.321506685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:41.321632 containerd[1829]: time="2024-12-13T01:27:41.321539245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.321855 containerd[1829]: time="2024-12-13T01:27:41.321672724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.324121 containerd[1829]: time="2024-12-13T01:27:41.324024955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:41.324386 containerd[1829]: time="2024-12-13T01:27:41.324247394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:41.324386 containerd[1829]: time="2024-12-13T01:27:41.324263754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.324582 containerd[1829]: time="2024-12-13T01:27:41.324513233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.357884 containerd[1829]: time="2024-12-13T01:27:41.357806298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wgrlp,Uid:065264e6-568a-4535-9c70-1df4badd6557,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:41.371816 containerd[1829]: time="2024-12-13T01:27:41.371568962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbmtk,Uid:c16c7b09-3da7-4f50-8901-b2eaa675c671,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\"" Dec 13 01:27:41.374809 containerd[1829]: time="2024-12-13T01:27:41.374755949Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:27:41.379336 containerd[1829]: time="2024-12-13T01:27:41.379300290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhxdj,Uid:4fbfec43-465c-4467-830c-279747552904,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05950e4344e33014c726ded0c4dd52dbf89ad7a9d92ce32ea32af4d3279a109\"" Dec 13 01:27:41.383351 containerd[1829]: time="2024-12-13T01:27:41.383236834Z" level=info msg="CreateContainer within sandbox \"e05950e4344e33014c726ded0c4dd52dbf89ad7a9d92ce32ea32af4d3279a109\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:27:41.414779 containerd[1829]: time="2024-12-13T01:27:41.414135989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:41.414779 containerd[1829]: time="2024-12-13T01:27:41.414583027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:41.414779 containerd[1829]: time="2024-12-13T01:27:41.414596227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.414779 containerd[1829]: time="2024-12-13T01:27:41.414682826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:41.433954 containerd[1829]: time="2024-12-13T01:27:41.433908108Z" level=info msg="CreateContainer within sandbox \"e05950e4344e33014c726ded0c4dd52dbf89ad7a9d92ce32ea32af4d3279a109\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4509481253469c5767c2a81f0ff22608ec697fa9a7b71ff453a7b705765b4680\"" Dec 13 01:27:41.434694 containerd[1829]: time="2024-12-13T01:27:41.434585946Z" level=info msg="StartContainer for \"4509481253469c5767c2a81f0ff22608ec697fa9a7b71ff453a7b705765b4680\"" Dec 13 01:27:41.459950 containerd[1829]: time="2024-12-13T01:27:41.459900323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-wgrlp,Uid:065264e6-568a-4535-9c70-1df4badd6557,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\"" Dec 13 01:27:41.496480 containerd[1829]: time="2024-12-13T01:27:41.496430414Z" level=info msg="StartContainer for \"4509481253469c5767c2a81f0ff22608ec697fa9a7b71ff453a7b705765b4680\" returns successfully" Dec 13 01:27:42.107954 kubelet[3394]: I1213 01:27:42.107840 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mhxdj" podStartSLOduration=2.107300252 podStartE2EDuration="2.107300252s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:42.106694415 +0000 UTC m=+15.222411073" watchObservedRunningTime="2024-12-13 01:27:42.107300252 +0000 UTC m=+15.223016910" Dec 13 01:27:46.512587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947886976.mount: Deactivated successfully. 
Dec 13 01:27:48.797299 containerd[1829]: time="2024-12-13T01:27:48.797032456Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:48.800506 containerd[1829]: time="2024-12-13T01:27:48.800476324Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650342" Dec 13 01:27:48.804017 containerd[1829]: time="2024-12-13T01:27:48.803889073Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:48.805370 containerd[1829]: time="2024-12-13T01:27:48.805254828Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.430432879s" Dec 13 01:27:48.805370 containerd[1829]: time="2024-12-13T01:27:48.805288868Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:27:48.807161 containerd[1829]: time="2024-12-13T01:27:48.806781063Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:27:48.808058 containerd[1829]: time="2024-12-13T01:27:48.808016578Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:27:48.833651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371821653.mount: Deactivated successfully. Dec 13 01:27:48.842572 containerd[1829]: time="2024-12-13T01:27:48.842520901Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\"" Dec 13 01:27:48.843035 containerd[1829]: time="2024-12-13T01:27:48.843008099Z" level=info msg="StartContainer for \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\"" Dec 13 01:27:48.892383 containerd[1829]: time="2024-12-13T01:27:48.890650096Z" level=info msg="StartContainer for \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\" returns successfully" Dec 13 01:27:49.830129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49-rootfs.mount: Deactivated successfully. 
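The cilium image entries above report 157650342 bytes read for a pull that completed in 7.430432879s; a quick Python check of the effective pull rate (illustrative only, figures copied from the log):

bytes_read = 157_650_342     # "bytes read" reported when the pull stopped
pull_seconds = 7.430432879   # duration reported by the Pulled message

print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")  # -> 21.2 MB/s effective pull rate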
Dec 13 01:27:50.722546 containerd[1829]: time="2024-12-13T01:27:50.722487637Z" level=info msg="shim disconnected" id=dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49 namespace=k8s.io Dec 13 01:27:50.722546 containerd[1829]: time="2024-12-13T01:27:50.722546677Z" level=warning msg="cleaning up after shim disconnected" id=dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49 namespace=k8s.io Dec 13 01:27:50.722546 containerd[1829]: time="2024-12-13T01:27:50.722556277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:51.120962 containerd[1829]: time="2024-12-13T01:27:51.120927076Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:27:51.164543 containerd[1829]: time="2024-12-13T01:27:51.164434327Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\"" Dec 13 01:27:51.165545 containerd[1829]: time="2024-12-13T01:27:51.165508963Z" level=info msg="StartContainer for \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\"" Dec 13 01:27:51.214656 containerd[1829]: time="2024-12-13T01:27:51.214614196Z" level=info msg="StartContainer for \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\" returns successfully" Dec 13 01:27:51.222688 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:27:51.223321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:51.223389 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:51.230873 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:51.242022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:51.253729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a-rootfs.mount: Deactivated successfully. 
Dec 13 01:27:51.266094 containerd[1829]: time="2024-12-13T01:27:51.266008300Z" level=info msg="shim disconnected" id=a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a namespace=k8s.io Dec 13 01:27:51.266094 containerd[1829]: time="2024-12-13T01:27:51.266090140Z" level=warning msg="cleaning up after shim disconnected" id=a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a namespace=k8s.io Dec 13 01:27:51.266406 containerd[1829]: time="2024-12-13T01:27:51.266101660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:52.122946 containerd[1829]: time="2024-12-13T01:27:52.122676373Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:27:52.179333 containerd[1829]: time="2024-12-13T01:27:52.179291899Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\"" Dec 13 01:27:52.179932 containerd[1829]: time="2024-12-13T01:27:52.179906897Z" level=info msg="StartContainer for \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\"" Dec 13 01:27:52.232837 containerd[1829]: time="2024-12-13T01:27:52.232795597Z" level=info msg="StartContainer for \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\" returns successfully" Dec 13 01:27:52.249969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811-rootfs.mount: Deactivated successfully. Dec 13 01:27:52.260243 containerd[1829]: time="2024-12-13T01:27:52.260189503Z" level=info msg="shim disconnected" id=f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811 namespace=k8s.io Dec 13 01:27:52.260243 containerd[1829]: time="2024-12-13T01:27:52.260241783Z" level=warning msg="cleaning up after shim disconnected" id=f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811 namespace=k8s.io Dec 13 01:27:52.260445 containerd[1829]: time="2024-12-13T01:27:52.260253663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:52.269875 containerd[1829]: time="2024-12-13T01:27:52.269802630Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:27:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:27:53.129502 containerd[1829]: time="2024-12-13T01:27:53.129340733Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:27:53.172740 containerd[1829]: time="2024-12-13T01:27:53.172498306Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\"" Dec 13 01:27:53.174249 containerd[1829]: time="2024-12-13T01:27:53.173619142Z" level=info msg="StartContainer for \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\"" Dec 13 01:27:53.217387 systemd[1]: run-containerd-runc-k8s.io-92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42-runc.zkT5MG.mount: Deactivated 
successfully. Dec 13 01:27:53.256750 containerd[1829]: time="2024-12-13T01:27:53.256553819Z" level=info msg="StartContainer for \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\" returns successfully" Dec 13 01:27:53.333454 containerd[1829]: time="2024-12-13T01:27:53.333132237Z" level=info msg="shim disconnected" id=92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42 namespace=k8s.io Dec 13 01:27:53.333833 containerd[1829]: time="2024-12-13T01:27:53.333803035Z" level=warning msg="cleaning up after shim disconnected" id=92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42 namespace=k8s.io Dec 13 01:27:53.333833 containerd[1829]: time="2024-12-13T01:27:53.333829515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:53.549528 containerd[1829]: time="2024-12-13T01:27:53.549227579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:53.551409 containerd[1829]: time="2024-12-13T01:27:53.551278172Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138306" Dec 13 01:27:53.554611 containerd[1829]: time="2024-12-13T01:27:53.554581040Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:53.556195 containerd[1829]: time="2024-12-13T01:27:53.556082435Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.749261092s" Dec 13 01:27:53.556195 containerd[1829]: time="2024-12-13T01:27:53.556118115Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:27:53.559101 containerd[1829]: time="2024-12-13T01:27:53.559043905Z" level=info msg="CreateContainer within sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:27:53.585179 containerd[1829]: time="2024-12-13T01:27:53.585134376Z" level=info msg="CreateContainer within sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\"" Dec 13 01:27:53.586786 containerd[1829]: time="2024-12-13T01:27:53.585956973Z" level=info msg="StartContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\"" Dec 13 01:27:53.628149 containerd[1829]: time="2024-12-13T01:27:53.628103749Z" level=info msg="StartContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" returns successfully" Dec 13 01:27:54.143463 containerd[1829]: time="2024-12-13T01:27:54.143419508Z" level=info msg="CreateContainer within sandbox 
\"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:27:54.161326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42-rootfs.mount: Deactivated successfully. Dec 13 01:27:54.187226 containerd[1829]: time="2024-12-13T01:27:54.187156799Z" level=info msg="CreateContainer within sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\"" Dec 13 01:27:54.190889 containerd[1829]: time="2024-12-13T01:27:54.188735473Z" level=info msg="StartContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\"" Dec 13 01:27:54.192466 kubelet[3394]: I1213 01:27:54.192194 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-wgrlp" podStartSLOduration=1.096938985 podStartE2EDuration="13.192151582s" podCreationTimestamp="2024-12-13 01:27:41 +0000 UTC" firstStartedPulling="2024-12-13 01:27:41.461321637 +0000 UTC m=+14.577038295" lastFinishedPulling="2024-12-13 01:27:53.556534274 +0000 UTC m=+26.672250892" observedRunningTime="2024-12-13 01:27:54.186652841 +0000 UTC m=+27.302369499" watchObservedRunningTime="2024-12-13 01:27:54.192151582 +0000 UTC m=+27.307868400" Dec 13 01:27:54.244294 systemd[1]: run-containerd-runc-k8s.io-e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea-runc.EwWxny.mount: Deactivated successfully. Dec 13 01:27:54.333411 containerd[1829]: time="2024-12-13T01:27:54.333372259Z" level=info msg="StartContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" returns successfully" Dec 13 01:27:54.464405 kubelet[3394]: I1213 01:27:54.463938 3394 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:27:54.498966 kubelet[3394]: I1213 01:27:54.498771 3394 topology_manager.go:215] "Topology Admit Handler" podUID="a0b1e275-aee8-439d-8c62-c13218164336" podNamespace="kube-system" podName="coredns-76f75df574-jxx94" Dec 13 01:27:54.513470 kubelet[3394]: I1213 01:27:54.513244 3394 topology_manager.go:215] "Topology Admit Handler" podUID="651f7de4-1018-4796-9ee1-0431fc6b6c1b" podNamespace="kube-system" podName="coredns-76f75df574-wrxgz" Dec 13 01:27:54.590467 kubelet[3394]: I1213 01:27:54.590354 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ztq\" (UniqueName: \"kubernetes.io/projected/a0b1e275-aee8-439d-8c62-c13218164336-kube-api-access-l8ztq\") pod \"coredns-76f75df574-jxx94\" (UID: \"a0b1e275-aee8-439d-8c62-c13218164336\") " pod="kube-system/coredns-76f75df574-jxx94" Dec 13 01:27:54.590467 kubelet[3394]: I1213 01:27:54.590434 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/651f7de4-1018-4796-9ee1-0431fc6b6c1b-config-volume\") pod \"coredns-76f75df574-wrxgz\" (UID: \"651f7de4-1018-4796-9ee1-0431fc6b6c1b\") " pod="kube-system/coredns-76f75df574-wrxgz" Dec 13 01:27:54.590467 kubelet[3394]: I1213 01:27:54.590456 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hr4\" (UniqueName: \"kubernetes.io/projected/651f7de4-1018-4796-9ee1-0431fc6b6c1b-kube-api-access-99hr4\") pod 
\"coredns-76f75df574-wrxgz\" (UID: \"651f7de4-1018-4796-9ee1-0431fc6b6c1b\") " pod="kube-system/coredns-76f75df574-wrxgz" Dec 13 01:27:54.590653 kubelet[3394]: I1213 01:27:54.590507 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0b1e275-aee8-439d-8c62-c13218164336-config-volume\") pod \"coredns-76f75df574-jxx94\" (UID: \"a0b1e275-aee8-439d-8c62-c13218164336\") " pod="kube-system/coredns-76f75df574-jxx94" Dec 13 01:27:54.811554 containerd[1829]: time="2024-12-13T01:27:54.811444786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jxx94,Uid:a0b1e275-aee8-439d-8c62-c13218164336,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:54.820486 containerd[1829]: time="2024-12-13T01:27:54.820236356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wrxgz,Uid:651f7de4-1018-4796-9ee1-0431fc6b6c1b,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:55.167494 kubelet[3394]: I1213 01:27:55.167186 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zbmtk" podStartSLOduration=7.734951458 podStartE2EDuration="15.16714397s" podCreationTimestamp="2024-12-13 01:27:40 +0000 UTC" firstStartedPulling="2024-12-13 01:27:41.373525794 +0000 UTC m=+14.489242452" lastFinishedPulling="2024-12-13 01:27:48.805718306 +0000 UTC m=+21.921434964" observedRunningTime="2024-12-13 01:27:55.165886815 +0000 UTC m=+28.281603473" watchObservedRunningTime="2024-12-13 01:27:55.16714397 +0000 UTC m=+28.282860628" Dec 13 01:27:57.443196 systemd-networkd[1370]: cilium_host: Link UP Dec 13 01:27:57.444189 systemd-networkd[1370]: cilium_net: Link UP Dec 13 01:27:57.444270 systemd-networkd[1370]: cilium_net: Gained carrier Dec 13 01:27:57.444434 systemd-networkd[1370]: cilium_host: Gained carrier Dec 13 01:27:57.444560 systemd-networkd[1370]: cilium_host: Gained IPv6LL Dec 13 01:27:57.570690 systemd-networkd[1370]: cilium_vxlan: Link UP Dec 13 01:27:57.570696 systemd-networkd[1370]: cilium_vxlan: Gained carrier Dec 13 01:27:57.784285 systemd-networkd[1370]: cilium_net: Gained IPv6LL Dec 13 01:27:58.088080 kernel: NET: Registered PF_ALG protocol family Dec 13 01:27:58.769244 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Dec 13 01:27:58.921203 systemd-networkd[1370]: lxc_health: Link UP Dec 13 01:27:58.934350 systemd-networkd[1370]: lxc_health: Gained carrier Dec 13 01:27:59.395698 systemd-networkd[1370]: lxc8682c20bbf2d: Link UP Dec 13 01:27:59.416075 kernel: eth0: renamed from tmpa1928 Dec 13 01:27:59.416593 systemd-networkd[1370]: lxc8682c20bbf2d: Gained carrier Dec 13 01:27:59.428117 systemd-networkd[1370]: lxc4f6cbe53c256: Link UP Dec 13 01:27:59.452069 kernel: eth0: renamed from tmp3caa1 Dec 13 01:27:59.458484 systemd-networkd[1370]: lxc4f6cbe53c256: Gained carrier Dec 13 01:28:00.369217 systemd-networkd[1370]: lxc_health: Gained IPv6LL Dec 13 01:28:00.816256 systemd-networkd[1370]: lxc4f6cbe53c256: Gained IPv6LL Dec 13 01:28:01.156272 systemd-networkd[1370]: lxc8682c20bbf2d: Gained IPv6LL Dec 13 01:28:03.081321 containerd[1829]: time="2024-12-13T01:28:03.080441737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:03.081321 containerd[1829]: time="2024-12-13T01:28:03.080505017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:03.081321 containerd[1829]: time="2024-12-13T01:28:03.080522457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:03.081321 containerd[1829]: time="2024-12-13T01:28:03.080636936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:03.120487 containerd[1829]: time="2024-12-13T01:28:03.119813562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:03.120487 containerd[1829]: time="2024-12-13T01:28:03.119867002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:03.120487 containerd[1829]: time="2024-12-13T01:28:03.119885402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:03.122371 containerd[1829]: time="2024-12-13T01:28:03.121668276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:03.151738 systemd[1]: run-containerd-runc-k8s.io-a192810336050cd49f925e5aa40127c586ab100a3d77cfccd4189a5607f34a11-runc.RBBWTo.mount: Deactivated successfully. Dec 13 01:28:03.199607 containerd[1829]: time="2024-12-13T01:28:03.199259770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wrxgz,Uid:651f7de4-1018-4796-9ee1-0431fc6b6c1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3caa195de829a7aee1bb153ee42a3da6c0306e020c1a31f9ba4194d49ac8322b\"" Dec 13 01:28:03.200616 containerd[1829]: time="2024-12-13T01:28:03.200587726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jxx94,Uid:a0b1e275-aee8-439d-8c62-c13218164336,Namespace:kube-system,Attempt:0,} returns sandbox id \"a192810336050cd49f925e5aa40127c586ab100a3d77cfccd4189a5607f34a11\"" Dec 13 01:28:03.207559 containerd[1829]: time="2024-12-13T01:28:03.207324183Z" level=info msg="CreateContainer within sandbox \"3caa195de829a7aee1bb153ee42a3da6c0306e020c1a31f9ba4194d49ac8322b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:03.214242 containerd[1829]: time="2024-12-13T01:28:03.213117723Z" level=info msg="CreateContainer within sandbox \"a192810336050cd49f925e5aa40127c586ab100a3d77cfccd4189a5607f34a11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:03.248344 containerd[1829]: time="2024-12-13T01:28:03.248294683Z" level=info msg="CreateContainer within sandbox \"3caa195de829a7aee1bb153ee42a3da6c0306e020c1a31f9ba4194d49ac8322b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d896d50f5d7f2979a802e660b1d9f6d52bc096a1ed12cdebe50c364c2a56ad\"" Dec 13 01:28:03.249073 containerd[1829]: time="2024-12-13T01:28:03.248983200Z" level=info msg="StartContainer for \"70d896d50f5d7f2979a802e660b1d9f6d52bc096a1ed12cdebe50c364c2a56ad\"" Dec 13 01:28:03.264092 containerd[1829]: time="2024-12-13T01:28:03.263911869Z" level=info msg="CreateContainer within sandbox \"a192810336050cd49f925e5aa40127c586ab100a3d77cfccd4189a5607f34a11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c65f8a28ff83e9abac682250afa1594ca809ffee5899d4960da4a8a49a08b64\"" Dec 13 01:28:03.268251 containerd[1829]: 
time="2024-12-13T01:28:03.267234098Z" level=info msg="StartContainer for \"1c65f8a28ff83e9abac682250afa1594ca809ffee5899d4960da4a8a49a08b64\"" Dec 13 01:28:03.315350 containerd[1829]: time="2024-12-13T01:28:03.315303013Z" level=info msg="StartContainer for \"70d896d50f5d7f2979a802e660b1d9f6d52bc096a1ed12cdebe50c364c2a56ad\" returns successfully" Dec 13 01:28:03.334312 containerd[1829]: time="2024-12-13T01:28:03.333511271Z" level=info msg="StartContainer for \"1c65f8a28ff83e9abac682250afa1594ca809ffee5899d4960da4a8a49a08b64\" returns successfully" Dec 13 01:28:04.156866 kubelet[3394]: I1213 01:28:04.156825 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:04.195165 kubelet[3394]: I1213 01:28:04.194957 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jxx94" podStartSLOduration=23.194902789 podStartE2EDuration="23.194902789s" podCreationTimestamp="2024-12-13 01:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:04.193658234 +0000 UTC m=+37.309374892" watchObservedRunningTime="2024-12-13 01:28:04.194902789 +0000 UTC m=+37.310619447" Dec 13 01:28:04.221742 kubelet[3394]: I1213 01:28:04.220461 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wrxgz" podStartSLOduration=23.220419861 podStartE2EDuration="23.220419861s" podCreationTimestamp="2024-12-13 01:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:04.220293542 +0000 UTC m=+37.336010200" watchObservedRunningTime="2024-12-13 01:28:04.220419861 +0000 UTC m=+37.336136519" Dec 13 01:29:19.411394 systemd[1]: Started sshd@7-10.200.20.42:22-10.200.16.10:43292.service - OpenSSH per-connection server daemon (10.200.16.10:43292). Dec 13 01:29:19.836158 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 43292 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:19.837653 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:19.841444 systemd-logind[1774]: New session 10 of user core. Dec 13 01:29:19.847350 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:29:20.241753 sshd[4769]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:20.245431 systemd[1]: sshd@7-10.200.20.42:22-10.200.16.10:43292.service: Deactivated successfully. Dec 13 01:29:20.247924 systemd-logind[1774]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:29:20.248435 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:29:20.250828 systemd-logind[1774]: Removed session 10. Dec 13 01:29:25.318314 systemd[1]: Started sshd@8-10.200.20.42:22-10.200.16.10:43298.service - OpenSSH per-connection server daemon (10.200.16.10:43298). Dec 13 01:29:25.748792 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 43298 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:25.750277 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:25.754693 systemd-logind[1774]: New session 11 of user core. Dec 13 01:29:25.759306 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 01:29:26.125155 sshd[4784]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:26.128745 systemd[1]: sshd@8-10.200.20.42:22-10.200.16.10:43298.service: Deactivated successfully. Dec 13 01:29:26.131899 systemd-logind[1774]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:29:26.132038 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:29:26.134355 systemd-logind[1774]: Removed session 11. Dec 13 01:29:31.202298 systemd[1]: Started sshd@9-10.200.20.42:22-10.200.16.10:45968.service - OpenSSH per-connection server daemon (10.200.16.10:45968). Dec 13 01:29:31.625110 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 45968 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:31.626454 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.630834 systemd-logind[1774]: New session 12 of user core. Dec 13 01:29:31.637326 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:29:32.021304 sshd[4801]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:32.024606 systemd[1]: sshd@9-10.200.20.42:22-10.200.16.10:45968.service: Deactivated successfully. Dec 13 01:29:32.028304 systemd-logind[1774]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:29:32.029169 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:29:32.030795 systemd-logind[1774]: Removed session 12. Dec 13 01:29:37.099326 systemd[1]: Started sshd@10-10.200.20.42:22-10.200.16.10:45976.service - OpenSSH per-connection server daemon (10.200.16.10:45976). Dec 13 01:29:37.542814 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 45976 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:37.543416 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:37.547805 systemd-logind[1774]: New session 13 of user core. Dec 13 01:29:37.550347 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:29:37.926208 sshd[4816]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:37.931415 systemd-logind[1774]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:29:37.932222 systemd[1]: sshd@10-10.200.20.42:22-10.200.16.10:45976.service: Deactivated successfully. Dec 13 01:29:37.935073 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:29:37.937180 systemd-logind[1774]: Removed session 13. Dec 13 01:29:38.003320 systemd[1]: Started sshd@11-10.200.20.42:22-10.200.16.10:45982.service - OpenSSH per-connection server daemon (10.200.16.10:45982). Dec 13 01:29:38.446338 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 45982 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:38.447632 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:38.451782 systemd-logind[1774]: New session 14 of user core. Dec 13 01:29:38.459446 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:29:38.861446 sshd[4830]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:38.864955 systemd[1]: sshd@11-10.200.20.42:22-10.200.16.10:45982.service: Deactivated successfully. Dec 13 01:29:38.868668 systemd-logind[1774]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:29:38.869028 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:29:38.870740 systemd-logind[1774]: Removed session 14. 
Dec 13 01:29:38.935326 systemd[1]: Started sshd@12-10.200.20.42:22-10.200.16.10:36738.service - OpenSSH per-connection server daemon (10.200.16.10:36738). Dec 13 01:29:39.358948 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 36738 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:39.360687 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:39.365004 systemd-logind[1774]: New session 15 of user core. Dec 13 01:29:39.368374 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:29:39.748156 sshd[4842]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:39.751914 systemd[1]: sshd@12-10.200.20.42:22-10.200.16.10:36738.service: Deactivated successfully. Dec 13 01:29:39.756318 systemd-logind[1774]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:29:39.756962 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:29:39.758827 systemd-logind[1774]: Removed session 15. Dec 13 01:29:44.827268 systemd[1]: Started sshd@13-10.200.20.42:22-10.200.16.10:36746.service - OpenSSH per-connection server daemon (10.200.16.10:36746). Dec 13 01:29:45.268604 sshd[4858]: Accepted publickey for core from 10.200.16.10 port 36746 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:45.269906 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:45.274026 systemd-logind[1774]: New session 16 of user core. Dec 13 01:29:45.281348 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:29:45.650441 sshd[4858]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:45.654459 systemd[1]: sshd@13-10.200.20.42:22-10.200.16.10:36746.service: Deactivated successfully. Dec 13 01:29:45.654712 systemd-logind[1774]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:29:45.657649 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:29:45.659278 systemd-logind[1774]: Removed session 16. Dec 13 01:29:50.729310 systemd[1]: Started sshd@14-10.200.20.42:22-10.200.16.10:55152.service - OpenSSH per-connection server daemon (10.200.16.10:55152). Dec 13 01:29:51.171966 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 55152 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:51.173339 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:51.177875 systemd-logind[1774]: New session 17 of user core. Dec 13 01:29:51.183314 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:29:51.554140 sshd[4871]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:51.557510 systemd[1]: sshd@14-10.200.20.42:22-10.200.16.10:55152.service: Deactivated successfully. Dec 13 01:29:51.561003 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:29:51.562825 systemd-logind[1774]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:29:51.563981 systemd-logind[1774]: Removed session 17. Dec 13 01:29:51.630308 systemd[1]: Started sshd@15-10.200.20.42:22-10.200.16.10:55166.service - OpenSSH per-connection server daemon (10.200.16.10:55166). 
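Each SSH connection above follows the same systemd cycle: a per-connection sshd@….service starts, pam_unix opens the session, systemd-logind announces "New session N of user core", and on logout the session scope deactivates and logind removes the session. The sketch below pairs the open/remove messages per session number and prints how long each session lasted; it assumes journal text in this format is piped on stdin, one entry per line, and the regexes and hard-coded year are assumptions for this particular log.

```go
// Sketch under assumptions: journal text like the lines above is piped on
// stdin, one entry per line; pair "New session N of user X." with
// "Removed session N." and print how long each SSH session lasted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	stampRe   = regexp.MustCompile(`^([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+)`)
	newSessRe = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	delSessRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{} // session number -> open time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		ts := stampRe.FindStringSubmatch(line)
		if ts == nil {
			continue
		}
		// The journal prefix carries no year; assume 2024 to match this boot.
		t, err := time.Parse("Jan 2 15:04:05.999999 2006", ts[1]+" 2024")
		if err != nil {
			continue
		}
		if m := newSessRe.FindStringSubmatch(line); m != nil {
			opened[m[1]] = t
		} else if m := delSessRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[1]]; ok {
				fmt.Printf("session %s lasted %v\n", m[1], t.Sub(start))
				delete(opened, m[1])
			}
		}
	}
}
```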
Dec 13 01:29:52.051959 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 55166 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:52.053296 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:52.057013 systemd-logind[1774]: New session 18 of user core. Dec 13 01:29:52.065266 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:29:52.484250 sshd[4885]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:52.487662 systemd[1]: sshd@15-10.200.20.42:22-10.200.16.10:55166.service: Deactivated successfully. Dec 13 01:29:52.492545 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:29:52.493499 systemd-logind[1774]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:29:52.494683 systemd-logind[1774]: Removed session 18. Dec 13 01:29:52.559516 systemd[1]: Started sshd@16-10.200.20.42:22-10.200.16.10:55182.service - OpenSSH per-connection server daemon (10.200.16.10:55182). Dec 13 01:29:52.982805 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 55182 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:52.984170 sshd[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:52.987979 systemd-logind[1774]: New session 19 of user core. Dec 13 01:29:52.994348 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:29:54.552198 sshd[4897]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:54.556483 systemd-logind[1774]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:29:54.557003 systemd[1]: sshd@16-10.200.20.42:22-10.200.16.10:55182.service: Deactivated successfully. Dec 13 01:29:54.559285 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:29:54.561480 systemd-logind[1774]: Removed session 19. Dec 13 01:29:54.640368 systemd[1]: Started sshd@17-10.200.20.42:22-10.200.16.10:55186.service - OpenSSH per-connection server daemon (10.200.16.10:55186). Dec 13 01:29:55.083496 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 55186 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:55.086266 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:55.092816 systemd-logind[1774]: New session 20 of user core. Dec 13 01:29:55.097327 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:29:55.579184 sshd[4916]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:55.583510 systemd-logind[1774]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:29:55.583755 systemd[1]: sshd@17-10.200.20.42:22-10.200.16.10:55186.service: Deactivated successfully. Dec 13 01:29:55.586527 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:29:55.588123 systemd-logind[1774]: Removed session 20. Dec 13 01:29:55.657282 systemd[1]: Started sshd@18-10.200.20.42:22-10.200.16.10:55188.service - OpenSSH per-connection server daemon (10.200.16.10:55188). Dec 13 01:29:56.079556 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 55188 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:56.080936 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:56.084793 systemd-logind[1774]: New session 21 of user core. Dec 13 01:29:56.090244 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 01:29:56.469179 sshd[4927]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:56.472523 systemd[1]: sshd@18-10.200.20.42:22-10.200.16.10:55188.service: Deactivated successfully. Dec 13 01:29:56.475160 systemd-logind[1774]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:29:56.475555 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:29:56.476752 systemd-logind[1774]: Removed session 21. Dec 13 01:30:01.546284 systemd[1]: Started sshd@19-10.200.20.42:22-10.200.16.10:43210.service - OpenSSH per-connection server daemon (10.200.16.10:43210). Dec 13 01:30:01.980314 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 43210 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:01.981656 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:01.986322 systemd-logind[1774]: New session 22 of user core. Dec 13 01:30:01.989380 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:30:02.355331 sshd[4944]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:02.358712 systemd[1]: sshd@19-10.200.20.42:22-10.200.16.10:43210.service: Deactivated successfully. Dec 13 01:30:02.361939 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:30:02.363636 systemd-logind[1774]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:30:02.364577 systemd-logind[1774]: Removed session 22. Dec 13 01:30:07.437214 systemd[1]: Started sshd@20-10.200.20.42:22-10.200.16.10:43220.service - OpenSSH per-connection server daemon (10.200.16.10:43220). Dec 13 01:30:07.880869 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 43220 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:07.882274 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:07.886171 systemd-logind[1774]: New session 23 of user core. Dec 13 01:30:07.895297 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:30:08.265278 sshd[4958]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.269263 systemd-logind[1774]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:30:08.270389 systemd[1]: sshd@20-10.200.20.42:22-10.200.16.10:43220.service: Deactivated successfully. Dec 13 01:30:08.273960 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:30:08.275193 systemd-logind[1774]: Removed session 23. Dec 13 01:30:13.342289 systemd[1]: Started sshd@21-10.200.20.42:22-10.200.16.10:46214.service - OpenSSH per-connection server daemon (10.200.16.10:46214). Dec 13 01:30:13.766567 sshd[4975]: Accepted publickey for core from 10.200.16.10 port 46214 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:13.767910 sshd[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:13.771778 systemd-logind[1774]: New session 24 of user core. Dec 13 01:30:13.779355 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:30:14.155284 sshd[4975]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:14.159190 systemd[1]: sshd@21-10.200.20.42:22-10.200.16.10:46214.service: Deactivated successfully. Dec 13 01:30:14.161877 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:30:14.162238 systemd-logind[1774]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:30:14.163753 systemd-logind[1774]: Removed session 24. 
Dec 13 01:30:14.232317 systemd[1]: Started sshd@22-10.200.20.42:22-10.200.16.10:46224.service - OpenSSH per-connection server daemon (10.200.16.10:46224). Dec 13 01:30:14.662126 sshd[4989]: Accepted publickey for core from 10.200.16.10 port 46224 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:14.663433 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:14.668644 systemd-logind[1774]: New session 25 of user core. Dec 13 01:30:14.673360 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:30:18.430309 containerd[1829]: time="2024-12-13T01:30:18.429890093Z" level=info msg="StopContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" with timeout 30 (s)" Dec 13 01:30:18.430698 containerd[1829]: time="2024-12-13T01:30:18.430416371Z" level=info msg="Stop container \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" with signal terminated" Dec 13 01:30:18.439129 containerd[1829]: time="2024-12-13T01:30:18.439060062Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:30:18.450531 containerd[1829]: time="2024-12-13T01:30:18.450381504Z" level=info msg="StopContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" with timeout 2 (s)" Dec 13 01:30:18.451113 containerd[1829]: time="2024-12-13T01:30:18.450960022Z" level=info msg="Stop container \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" with signal terminated" Dec 13 01:30:18.460025 systemd-networkd[1370]: lxc_health: Link DOWN Dec 13 01:30:18.460038 systemd-networkd[1370]: lxc_health: Lost carrier Dec 13 01:30:18.472415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a-rootfs.mount: Deactivated successfully. Dec 13 01:30:18.499911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea-rootfs.mount: Deactivated successfully. 
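The "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated" entries above describe the usual graceful-stop escalation: deliver SIGTERM, wait up to the timeout for the process to exit, then force-kill whatever is still running. A stdlib-only sketch of that pattern follows; it acts on a locally spawned process and is not containerd's implementation.

```go
// Sketch of the SIGTERM-then-SIGKILL escalation implied by
// "Stop container ... with signal terminated" plus "with timeout 30 (s)".
// Not containerd code; it demonstrates the pattern on a child process.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM and waits up to timeout for the process to
// exit; if it is still running afterwards, it escalates to SIGKILL.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own after SIGTERM
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate, mirroring the runtime's forced stop
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	err := stopWithTimeout(cmd, 2*time.Second)
	fmt.Println("process stopped:", err)
}
```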
Dec 13 01:30:18.537587 containerd[1829]: time="2024-12-13T01:30:18.537346291Z" level=info msg="shim disconnected" id=e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea namespace=k8s.io Dec 13 01:30:18.537587 containerd[1829]: time="2024-12-13T01:30:18.537509331Z" level=warning msg="cleaning up after shim disconnected" id=e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea namespace=k8s.io Dec 13 01:30:18.537587 containerd[1829]: time="2024-12-13T01:30:18.537520011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:18.538466 containerd[1829]: time="2024-12-13T01:30:18.538256208Z" level=info msg="shim disconnected" id=2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a namespace=k8s.io Dec 13 01:30:18.538466 containerd[1829]: time="2024-12-13T01:30:18.538321888Z" level=warning msg="cleaning up after shim disconnected" id=2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a namespace=k8s.io Dec 13 01:30:18.538466 containerd[1829]: time="2024-12-13T01:30:18.538334088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:18.557331 containerd[1829]: time="2024-12-13T01:30:18.557284864Z" level=info msg="StopContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" returns successfully" Dec 13 01:30:18.558521 containerd[1829]: time="2024-12-13T01:30:18.558417820Z" level=info msg="StopPodSandbox for \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\"" Dec 13 01:30:18.559632 containerd[1829]: time="2024-12-13T01:30:18.559372617Z" level=info msg="Container to stop \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.562135 containerd[1829]: time="2024-12-13T01:30:18.561803609Z" level=info msg="StopContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" returns successfully" Dec 13 01:30:18.562278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c-shm.mount: Deactivated successfully. 
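The containerd entries are logfmt-style: space-separated key=value pairs (time=, level=, msg=, id=, namespace=), with values quoted when they contain spaces. A rough extraction sketch is below, useful for pulling out which shim ids were cleaned up; a production logfmt parser would handle escaping more strictly.

```go
// Minimal logfmt-style field extraction for containerd entries like the
// "shim disconnected" lines above (sketch only).
package main

import (
	"fmt"
	"regexp"
)

// kvRe matches key=value pairs where the value is either quoted or a bare token.
var kvRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func fields(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		v := m[2]
		if len(v) >= 2 && v[0] == '"' { // drop surrounding quotes on quoted values
			v = v[1 : len(v)-1]
		}
		out[m[1]] = v
	}
	return out
}

func main() {
	line := `time="2024-12-13T01:30:18.537346291Z" level=info msg="shim disconnected" ` +
		`id=e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea namespace=k8s.io`
	f := fields(line)
	fmt.Println(f["level"], f["msg"], f["id"], f["namespace"])
}
```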
Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564509760Z" level=info msg="StopPodSandbox for \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\"" Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564551840Z" level=info msg="Container to stop \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564566919Z" level=info msg="Container to stop \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564577319Z" level=info msg="Container to stop \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564588159Z" level=info msg="Container to stop \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.564840 containerd[1829]: time="2024-12-13T01:30:18.564598839Z" level=info msg="Container to stop \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:18.569685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6-shm.mount: Deactivated successfully. Dec 13 01:30:18.608874 containerd[1829]: time="2024-12-13T01:30:18.608786531Z" level=info msg="shim disconnected" id=f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c namespace=k8s.io Dec 13 01:30:18.609406 containerd[1829]: time="2024-12-13T01:30:18.608842570Z" level=warning msg="cleaning up after shim disconnected" id=f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c namespace=k8s.io Dec 13 01:30:18.609406 containerd[1829]: time="2024-12-13T01:30:18.609134129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:18.609406 containerd[1829]: time="2024-12-13T01:30:18.609216369Z" level=info msg="shim disconnected" id=cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6 namespace=k8s.io Dec 13 01:30:18.609406 containerd[1829]: time="2024-12-13T01:30:18.609252329Z" level=warning msg="cleaning up after shim disconnected" id=cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6 namespace=k8s.io Dec 13 01:30:18.609406 containerd[1829]: time="2024-12-13T01:30:18.609259929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:18.623828 containerd[1829]: time="2024-12-13T01:30:18.623776280Z" level=info msg="TearDown network for sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" successfully" Dec 13 01:30:18.624143 containerd[1829]: time="2024-12-13T01:30:18.623971439Z" level=info msg="StopPodSandbox for \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" returns successfully" Dec 13 01:30:18.624316 containerd[1829]: time="2024-12-13T01:30:18.624275238Z" level=info msg="TearDown network for sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" successfully" Dec 13 01:30:18.624316 containerd[1829]: time="2024-12-13T01:30:18.624295518Z" level=info msg="StopPodSandbox for 
\"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" returns successfully" Dec 13 01:30:18.813743 kubelet[3394]: I1213 01:30:18.813618 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-kernel\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.813743 kubelet[3394]: I1213 01:30:18.813679 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c16c7b09-3da7-4f50-8901-b2eaa675c671-clustermesh-secrets\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.813743 kubelet[3394]: I1213 01:30:18.813699 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-xtables-lock\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.813743 kubelet[3394]: I1213 01:30:18.813719 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-cgroup\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.813743 kubelet[3394]: I1213 01:30:18.813741 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsm4p\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-kube-api-access-jsm4p\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813758 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-bpf-maps\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813776 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-lib-modules\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813800 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v86m\" (UniqueName: \"kubernetes.io/projected/065264e6-568a-4535-9c70-1df4badd6557-kube-api-access-4v86m\") pod \"065264e6-568a-4535-9c70-1df4badd6557\" (UID: \"065264e6-568a-4535-9c70-1df4badd6557\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813818 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-net\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813838 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-config-path\") pod 
\"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815018 kubelet[3394]: I1213 01:30:18.813854 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-hostproc\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.813873 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-hubble-tls\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.813892 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/065264e6-568a-4535-9c70-1df4badd6557-cilium-config-path\") pod \"065264e6-568a-4535-9c70-1df4badd6557\" (UID: \"065264e6-568a-4535-9c70-1df4badd6557\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.813935 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-etc-cni-netd\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.813952 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cni-path\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.813969 3394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-run\") pod \"c16c7b09-3da7-4f50-8901-b2eaa675c671\" (UID: \"c16c7b09-3da7-4f50-8901-b2eaa675c671\") " Dec 13 01:30:18.815734 kubelet[3394]: I1213 01:30:18.814035 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.815875 kubelet[3394]: I1213 01:30:18.814860 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.815875 kubelet[3394]: I1213 01:30:18.815375 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-hostproc" (OuterVolumeSpecName: "hostproc") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.816075 kubelet[3394]: I1213 01:30:18.815958 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.818125 kubelet[3394]: I1213 01:30:18.818086 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.818219 kubelet[3394]: I1213 01:30:18.818140 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.820095 kubelet[3394]: I1213 01:30:18.819823 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.820095 kubelet[3394]: I1213 01:30:18.819870 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.820386 kubelet[3394]: I1213 01:30:18.820265 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.820386 kubelet[3394]: I1213 01:30:18.820312 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cni-path" (OuterVolumeSpecName: "cni-path") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:18.821265 kubelet[3394]: I1213 01:30:18.821243 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:30:18.821550 kubelet[3394]: I1213 01:30:18.821498 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065264e6-568a-4535-9c70-1df4badd6557-kube-api-access-4v86m" (OuterVolumeSpecName: "kube-api-access-4v86m") pod "065264e6-568a-4535-9c70-1df4badd6557" (UID: "065264e6-568a-4535-9c70-1df4badd6557"). InnerVolumeSpecName "kube-api-access-4v86m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:30:18.821626 kubelet[3394]: I1213 01:30:18.821592 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16c7b09-3da7-4f50-8901-b2eaa675c671-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:30:18.822087 kubelet[3394]: I1213 01:30:18.822066 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065264e6-568a-4535-9c70-1df4badd6557-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "065264e6-568a-4535-9c70-1df4badd6557" (UID: "065264e6-568a-4535-9c70-1df4badd6557"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:30:18.822177 kubelet[3394]: I1213 01:30:18.822040 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-kube-api-access-jsm4p" (OuterVolumeSpecName: "kube-api-access-jsm4p") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "kube-api-access-jsm4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:30:18.823097 kubelet[3394]: I1213 01:30:18.823040 3394 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c16c7b09-3da7-4f50-8901-b2eaa675c671" (UID: "c16c7b09-3da7-4f50-8901-b2eaa675c671"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:30:18.915184 kubelet[3394]: I1213 01:30:18.915139 3394 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-etc-cni-netd\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915184 kubelet[3394]: I1213 01:30:18.915180 3394 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cni-path\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915184 kubelet[3394]: I1213 01:30:18.915192 3394 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-run\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915184 kubelet[3394]: I1213 01:30:18.915201 3394 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-hubble-tls\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915213 3394 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/065264e6-568a-4535-9c70-1df4badd6557-cilium-config-path\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915224 3394 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-kernel\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915235 3394 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c16c7b09-3da7-4f50-8901-b2eaa675c671-clustermesh-secrets\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915245 3394 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-xtables-lock\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915255 3394 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-cgroup\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915264 3394 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jsm4p\" (UniqueName: \"kubernetes.io/projected/c16c7b09-3da7-4f50-8901-b2eaa675c671-kube-api-access-jsm4p\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915274 3394 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-bpf-maps\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915394 kubelet[3394]: I1213 01:30:18.915284 3394 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-lib-modules\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915559 kubelet[3394]: I1213 01:30:18.915297 3394 
reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4v86m\" (UniqueName: \"kubernetes.io/projected/065264e6-568a-4535-9c70-1df4badd6557-kube-api-access-4v86m\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915559 kubelet[3394]: I1213 01:30:18.915309 3394 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-host-proc-sys-net\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915559 kubelet[3394]: I1213 01:30:18.915319 3394 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c16c7b09-3da7-4f50-8901-b2eaa675c671-cilium-config-path\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:18.915559 kubelet[3394]: I1213 01:30:18.915329 3394 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c16c7b09-3da7-4f50-8901-b2eaa675c671-hostproc\") on node \"ci-4081.2.1-a-16a3da9678\" DevicePath \"\"" Dec 13 01:30:19.415481 kubelet[3394]: I1213 01:30:19.415147 3394 scope.go:117] "RemoveContainer" containerID="2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a" Dec 13 01:30:19.418757 containerd[1829]: time="2024-12-13T01:30:19.418719322Z" level=info msg="RemoveContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\"" Dec 13 01:30:19.424565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c-rootfs.mount: Deactivated successfully. Dec 13 01:30:19.424779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6-rootfs.mount: Deactivated successfully. Dec 13 01:30:19.424870 systemd[1]: var-lib-kubelet-pods-065264e6\x2d568a\x2d4535\x2d9c70\x2d1df4badd6557-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4v86m.mount: Deactivated successfully. Dec 13 01:30:19.424959 systemd[1]: var-lib-kubelet-pods-c16c7b09\x2d3da7\x2d4f50\x2d8901\x2db2eaa675c671-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsm4p.mount: Deactivated successfully. Dec 13 01:30:19.425036 systemd[1]: var-lib-kubelet-pods-c16c7b09\x2d3da7\x2d4f50\x2d8901\x2db2eaa675c671-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:30:19.425161 systemd[1]: var-lib-kubelet-pods-c16c7b09\x2d3da7\x2d4f50\x2d8901\x2db2eaa675c671-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
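The mount units deactivated above encode the kubelet pod-volume paths using systemd's unit-name escaping: the leading slash is dropped, '/' becomes '-', and bytes outside roughly [A-Za-z0-9_.] become \xNN escapes, which is why '~' appears as \x7e and '-' as \x2d. The simplified Go sketch below reproduces the hubble-tls unit name seen above; real systemd-escape has additional rules (leading dot, the root directory, and so on).

```go
// Simplified sketch of systemd path-to-unit-name escaping, to explain mount
// unit names like "var-lib-kubelet-pods-...-kubernetes.io\x7eprojected-....mount".
package main

import "fmt"

func escapePath(path string) string {
	// Drop the leading "/" before escaping, as systemd does for mount units.
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	out := make([]byte, 0, len(path))
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out = append(out, '-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			out = append(out, c) // kept verbatim
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...) // '-' -> \x2d, '~' -> \x7e
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/kubelet/pods/c16c7b09-3da7-4f50-8901-b2eaa675c671/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
}
```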
Dec 13 01:30:19.439583 containerd[1829]: time="2024-12-13T01:30:19.439113453Z" level=info msg="RemoveContainer for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" returns successfully" Dec 13 01:30:19.440294 kubelet[3394]: I1213 01:30:19.439680 3394 scope.go:117] "RemoveContainer" containerID="2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a" Dec 13 01:30:19.441187 containerd[1829]: time="2024-12-13T01:30:19.441102127Z" level=error msg="ContainerStatus for \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\": not found" Dec 13 01:30:19.441919 kubelet[3394]: E1213 01:30:19.441811 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\": not found" containerID="2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a" Dec 13 01:30:19.444744 kubelet[3394]: I1213 01:30:19.443747 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a"} err="failed to get container status \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2660cf7c3ce99c85a3fdd15adb08e4b5ca7cfe1efbc8135376847385a499e28a\": not found" Dec 13 01:30:19.444744 kubelet[3394]: I1213 01:30:19.443779 3394 scope.go:117] "RemoveContainer" containerID="e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea" Dec 13 01:30:19.451136 containerd[1829]: time="2024-12-13T01:30:19.449094180Z" level=info msg="RemoveContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\"" Dec 13 01:30:19.463282 containerd[1829]: time="2024-12-13T01:30:19.463236132Z" level=info msg="RemoveContainer for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" returns successfully" Dec 13 01:30:19.463687 kubelet[3394]: I1213 01:30:19.463565 3394 scope.go:117] "RemoveContainer" containerID="92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42" Dec 13 01:30:19.464692 containerd[1829]: time="2024-12-13T01:30:19.464640127Z" level=info msg="RemoveContainer for \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\"" Dec 13 01:30:19.473587 containerd[1829]: time="2024-12-13T01:30:19.473534498Z" level=info msg="RemoveContainer for \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\" returns successfully" Dec 13 01:30:19.474000 kubelet[3394]: I1213 01:30:19.473815 3394 scope.go:117] "RemoveContainer" containerID="f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811" Dec 13 01:30:19.475190 containerd[1829]: time="2024-12-13T01:30:19.475068332Z" level=info msg="RemoveContainer for \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\"" Dec 13 01:30:19.484300 containerd[1829]: time="2024-12-13T01:30:19.484259901Z" level=info msg="RemoveContainer for \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\" returns successfully" Dec 13 01:30:19.484603 kubelet[3394]: I1213 01:30:19.484570 3394 scope.go:117] "RemoveContainer" containerID="a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a" Dec 13 01:30:19.485869 containerd[1829]: 
time="2024-12-13T01:30:19.485827216Z" level=info msg="RemoveContainer for \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\"" Dec 13 01:30:19.498605 containerd[1829]: time="2024-12-13T01:30:19.498562173Z" level=info msg="RemoveContainer for \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\" returns successfully" Dec 13 01:30:19.498914 kubelet[3394]: I1213 01:30:19.498884 3394 scope.go:117] "RemoveContainer" containerID="dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49" Dec 13 01:30:19.500176 containerd[1829]: time="2024-12-13T01:30:19.500128328Z" level=info msg="RemoveContainer for \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\"" Dec 13 01:30:19.509353 containerd[1829]: time="2024-12-13T01:30:19.509309937Z" level=info msg="RemoveContainer for \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\" returns successfully" Dec 13 01:30:19.509660 kubelet[3394]: I1213 01:30:19.509554 3394 scope.go:117] "RemoveContainer" containerID="e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea" Dec 13 01:30:19.509923 containerd[1829]: time="2024-12-13T01:30:19.509890975Z" level=error msg="ContainerStatus for \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\": not found" Dec 13 01:30:19.510260 kubelet[3394]: E1213 01:30:19.510102 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\": not found" containerID="e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea" Dec 13 01:30:19.510260 kubelet[3394]: I1213 01:30:19.510142 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea"} err="failed to get container status \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\": rpc error: code = NotFound desc = an error occurred when try to find container \"e97a2b851ea1f6015ddf6b57a544513ebf041c1c965d5f8dae0f2990c31b1bea\": not found" Dec 13 01:30:19.510260 kubelet[3394]: I1213 01:30:19.510184 3394 scope.go:117] "RemoveContainer" containerID="92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42" Dec 13 01:30:19.510409 containerd[1829]: time="2024-12-13T01:30:19.510386693Z" level=error msg="ContainerStatus for \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\": not found" Dec 13 01:30:19.510536 kubelet[3394]: E1213 01:30:19.510511 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\": not found" containerID="92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42" Dec 13 01:30:19.510576 kubelet[3394]: I1213 01:30:19.510553 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42"} err="failed to get container status 
\"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\": rpc error: code = NotFound desc = an error occurred when try to find container \"92283f6a2d7598f1916d6bf0daef8ee7cc5b88caee6ade92b4c19fd90e7a2b42\": not found" Dec 13 01:30:19.510576 kubelet[3394]: I1213 01:30:19.510565 3394 scope.go:117] "RemoveContainer" containerID="f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811" Dec 13 01:30:19.510761 containerd[1829]: time="2024-12-13T01:30:19.510725692Z" level=error msg="ContainerStatus for \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\": not found" Dec 13 01:30:19.510943 kubelet[3394]: E1213 01:30:19.510923 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\": not found" containerID="f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811" Dec 13 01:30:19.510972 kubelet[3394]: I1213 01:30:19.510958 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811"} err="failed to get container status \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3eec152724c6789bd9e9f6dac80553ecd3549b3bce4438c1130a25cfade2811\": not found" Dec 13 01:30:19.510972 kubelet[3394]: I1213 01:30:19.510971 3394 scope.go:117] "RemoveContainer" containerID="a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a" Dec 13 01:30:19.511220 containerd[1829]: time="2024-12-13T01:30:19.511191291Z" level=error msg="ContainerStatus for \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\": not found" Dec 13 01:30:19.511334 kubelet[3394]: E1213 01:30:19.511312 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\": not found" containerID="a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a" Dec 13 01:30:19.511373 kubelet[3394]: I1213 01:30:19.511345 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a"} err="failed to get container status \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6d5df5daaa2ea48021fb85cf1a228540de0c92e1fe662a668340e537cc10a3a\": not found" Dec 13 01:30:19.511373 kubelet[3394]: I1213 01:30:19.511356 3394 scope.go:117] "RemoveContainer" containerID="dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49" Dec 13 01:30:19.511539 containerd[1829]: time="2024-12-13T01:30:19.511504930Z" level=error msg="ContainerStatus for \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\": not found" Dec 13 01:30:19.511688 kubelet[3394]: E1213 01:30:19.511670 3394 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\": not found" containerID="dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49" Dec 13 01:30:19.511716 kubelet[3394]: I1213 01:30:19.511701 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49"} err="failed to get container status \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfed549af4c863b403ad68e14453863f22214e5cb55fb801c8e8134f4f6aaf49\": not found" Dec 13 01:30:20.428301 sshd[4989]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:20.431315 systemd[1]: sshd@22-10.200.20.42:22-10.200.16.10:46224.service: Deactivated successfully. Dec 13 01:30:20.434912 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:30:20.436644 systemd-logind[1774]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:30:20.437764 systemd-logind[1774]: Removed session 25. Dec 13 01:30:20.505381 systemd[1]: Started sshd@23-10.200.20.42:22-10.200.16.10:49028.service - OpenSSH per-connection server daemon (10.200.16.10:49028). Dec 13 01:30:20.929846 sshd[5152]: Accepted publickey for core from 10.200.16.10 port 49028 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:20.931292 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:20.935280 systemd-logind[1774]: New session 26 of user core. Dec 13 01:30:20.941587 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 01:30:21.027293 kubelet[3394]: I1213 01:30:21.027141 3394 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="065264e6-568a-4535-9c70-1df4badd6557" path="/var/lib/kubelet/pods/065264e6-568a-4535-9c70-1df4badd6557/volumes" Dec 13 01:30:21.028449 kubelet[3394]: I1213 01:30:21.028067 3394 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" path="/var/lib/kubelet/pods/c16c7b09-3da7-4f50-8901-b2eaa675c671/volumes" Dec 13 01:30:21.870669 kubelet[3394]: I1213 01:30:21.870620 3394 topology_manager.go:215] "Topology Admit Handler" podUID="81284ef3-b5ce-4cfc-a556-81d27b2e145f" podNamespace="kube-system" podName="cilium-hqw8j" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870693 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="mount-cgroup" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870705 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="mount-bpf-fs" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870712 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="clean-cilium-state" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870718 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="cilium-agent" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870726 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="apply-sysctl-overwrites" Dec 13 01:30:21.870823 kubelet[3394]: E1213 01:30:21.870732 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="065264e6-568a-4535-9c70-1df4badd6557" containerName="cilium-operator" Dec 13 01:30:21.870823 kubelet[3394]: I1213 01:30:21.870755 3394 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16c7b09-3da7-4f50-8901-b2eaa675c671" containerName="cilium-agent" Dec 13 01:30:21.870823 kubelet[3394]: I1213 01:30:21.870762 3394 memory_manager.go:354] "RemoveStaleState removing state" podUID="065264e6-568a-4535-9c70-1df4badd6557" containerName="cilium-operator" Dec 13 01:30:21.902925 sshd[5152]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:21.913372 systemd-logind[1774]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:30:21.913495 systemd[1]: sshd@23-10.200.20.42:22-10.200.16.10:49028.service: Deactivated successfully. Dec 13 01:30:21.917247 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:30:21.923510 systemd-logind[1774]: Removed session 26. 
Dec 13 01:30:21.929689 kubelet[3394]: I1213 01:30:21.929655 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-cilium-run\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930739 kubelet[3394]: I1213 01:30:21.930582 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-bpf-maps\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930739 kubelet[3394]: I1213 01:30:21.930633 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-cilium-cgroup\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930739 kubelet[3394]: I1213 01:30:21.930658 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81284ef3-b5ce-4cfc-a556-81d27b2e145f-cilium-ipsec-secrets\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930739 kubelet[3394]: I1213 01:30:21.930680 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-host-proc-sys-kernel\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930739 kubelet[3394]: I1213 01:30:21.930700 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81284ef3-b5ce-4cfc-a556-81d27b2e145f-hubble-tls\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930775 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-cni-path\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930819 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-etc-cni-netd\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930854 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-lib-modules\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930878 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-xtables-lock\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930900 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81284ef3-b5ce-4cfc-a556-81d27b2e145f-cilium-config-path\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.930978 kubelet[3394]: I1213 01:30:21.930935 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-hostproc\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.931160 kubelet[3394]: I1213 01:30:21.930970 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81284ef3-b5ce-4cfc-a556-81d27b2e145f-clustermesh-secrets\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.931160 kubelet[3394]: I1213 01:30:21.930991 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81284ef3-b5ce-4cfc-a556-81d27b2e145f-host-proc-sys-net\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.931160 kubelet[3394]: I1213 01:30:21.931016 3394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfcb\" (UniqueName: \"kubernetes.io/projected/81284ef3-b5ce-4cfc-a556-81d27b2e145f-kube-api-access-xlfcb\") pod \"cilium-hqw8j\" (UID: \"81284ef3-b5ce-4cfc-a556-81d27b2e145f\") " pod="kube-system/cilium-hqw8j"
Dec 13 01:30:21.978322 systemd[1]: Started sshd@24-10.200.20.42:22-10.200.16.10:49030.service - OpenSSH per-connection server daemon (10.200.16.10:49030).
Dec 13 01:30:22.140986 kubelet[3394]: E1213 01:30:22.140414 3394 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:30:22.180432 containerd[1829]: time="2024-12-13T01:30:22.180320321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqw8j,Uid:81284ef3-b5ce-4cfc-a556-81d27b2e145f,Namespace:kube-system,Attempt:0,}"
Dec 13 01:30:22.219274 containerd[1829]: time="2024-12-13T01:30:22.219148867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:30:22.219274 containerd[1829]: time="2024-12-13T01:30:22.219230226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:30:22.219274 containerd[1829]: time="2024-12-13T01:30:22.219242666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:22.219838 containerd[1829]: time="2024-12-13T01:30:22.219394386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:22.256476 containerd[1829]: time="2024-12-13T01:30:22.256218979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqw8j,Uid:81284ef3-b5ce-4cfc-a556-81d27b2e145f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\""
Dec 13 01:30:22.259889 containerd[1829]: time="2024-12-13T01:30:22.259694967Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:30:22.293542 containerd[1829]: time="2024-12-13T01:30:22.293452490Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9beef6459704fbda6f29fd3e551951fac0bb968e2db2530d066b84a7cb591392\""
Dec 13 01:30:22.294108 containerd[1829]: time="2024-12-13T01:30:22.293974048Z" level=info msg="StartContainer for \"9beef6459704fbda6f29fd3e551951fac0bb968e2db2530d066b84a7cb591392\""
Dec 13 01:30:22.338649 containerd[1829]: time="2024-12-13T01:30:22.338595094Z" level=info msg="StartContainer for \"9beef6459704fbda6f29fd3e551951fac0bb968e2db2530d066b84a7cb591392\" returns successfully"
Dec 13 01:30:22.411088 sshd[5167]: Accepted publickey for core from 10.200.16.10 port 49030 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:22.412296 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:22.416389 systemd-logind[1774]: New session 27 of user core.
Dec 13 01:30:22.421284 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:30:22.449307 containerd[1829]: time="2024-12-13T01:30:22.449240471Z" level=info msg="shim disconnected" id=9beef6459704fbda6f29fd3e551951fac0bb968e2db2530d066b84a7cb591392 namespace=k8s.io
Dec 13 01:30:22.449307 containerd[1829]: time="2024-12-13T01:30:22.449298591Z" level=warning msg="cleaning up after shim disconnected" id=9beef6459704fbda6f29fd3e551951fac0bb968e2db2530d066b84a7cb591392 namespace=k8s.io
Dec 13 01:30:22.449307 containerd[1829]: time="2024-12-13T01:30:22.449308511Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:30:22.725392 sshd[5167]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:22.728252 systemd[1]: sshd@24-10.200.20.42:22-10.200.16.10:49030.service: Deactivated successfully.
Dec 13 01:30:22.733008 systemd-logind[1774]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:30:22.733279 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:30:22.735598 systemd-logind[1774]: Removed session 27.
Dec 13 01:30:22.803292 systemd[1]: Started sshd@25-10.200.20.42:22-10.200.16.10:49040.service - OpenSSH per-connection server daemon (10.200.16.10:49040).
Dec 13 01:30:23.246878 sshd[5285]: Accepted publickey for core from 10.200.16.10 port 49040 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:23.248253 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:23.252494 systemd-logind[1774]: New session 28 of user core.
Dec 13 01:30:23.259346 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:30:23.437201 containerd[1829]: time="2024-12-13T01:30:23.436673498Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:30:23.470160 containerd[1829]: time="2024-12-13T01:30:23.470119222Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda\""
Dec 13 01:30:23.470842 containerd[1829]: time="2024-12-13T01:30:23.470725860Z" level=info msg="StartContainer for \"3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda\""
Dec 13 01:30:23.524794 containerd[1829]: time="2024-12-13T01:30:23.524584114Z" level=info msg="StartContainer for \"3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda\" returns successfully"
Dec 13 01:30:23.559892 containerd[1829]: time="2024-12-13T01:30:23.559588353Z" level=info msg="shim disconnected" id=3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda namespace=k8s.io
Dec 13 01:30:23.559892 containerd[1829]: time="2024-12-13T01:30:23.559638353Z" level=warning msg="cleaning up after shim disconnected" id=3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda namespace=k8s.io
Dec 13 01:30:23.559892 containerd[1829]: time="2024-12-13T01:30:23.559648633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:30:24.037854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d20a65cff633a5b20ca96531951468d8629c65536cfa5620fafa9b67308ecda-rootfs.mount: Deactivated successfully.
Dec 13 01:30:24.448080 containerd[1829]: time="2024-12-13T01:30:24.446655006Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:30:24.641509 containerd[1829]: time="2024-12-13T01:30:24.641426173Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70\""
Dec 13 01:30:24.642311 containerd[1829]: time="2024-12-13T01:30:24.642033811Z" level=info msg="StartContainer for \"c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70\""
Dec 13 01:30:24.699940 containerd[1829]: time="2024-12-13T01:30:24.699853411Z" level=info msg="StartContainer for \"c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70\" returns successfully"
Dec 13 01:30:24.729678 containerd[1829]: time="2024-12-13T01:30:24.729509949Z" level=info msg="shim disconnected" id=c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70 namespace=k8s.io
Dec 13 01:30:24.729678 containerd[1829]: time="2024-12-13T01:30:24.729571508Z" level=warning msg="cleaning up after shim disconnected" id=c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70 namespace=k8s.io
Dec 13 01:30:24.729678 containerd[1829]: time="2024-12-13T01:30:24.729580388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:30:25.025470 kubelet[3394]: E1213 01:30:25.024946 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-jxx94" podUID="a0b1e275-aee8-439d-8c62-c13218164336"
Dec 13 01:30:25.037949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c57376f8dc41d48b75996f122c46deae48ab85c80eb615e7b88c232365f9ab70-rootfs.mount: Deactivated successfully.
Dec 13 01:30:25.446126 containerd[1829]: time="2024-12-13T01:30:25.445999952Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:30:25.484123 containerd[1829]: time="2024-12-13T01:30:25.484042620Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f\""
Dec 13 01:30:25.485607 containerd[1829]: time="2024-12-13T01:30:25.484727618Z" level=info msg="StartContainer for \"4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f\""
Dec 13 01:30:25.532266 containerd[1829]: time="2024-12-13T01:30:25.532208214Z" level=info msg="StartContainer for \"4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f\" returns successfully"
Dec 13 01:30:25.557261 containerd[1829]: time="2024-12-13T01:30:25.557209527Z" level=info msg="shim disconnected" id=4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f namespace=k8s.io
Dec 13 01:30:25.557619 containerd[1829]: time="2024-12-13T01:30:25.557469287Z" level=warning msg="cleaning up after shim disconnected" id=4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f namespace=k8s.io
Dec 13 01:30:25.557619 containerd[1829]: time="2024-12-13T01:30:25.557486086Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:30:26.037942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4095e5ff87530491f902ae5c71c568167e1055442f9911a116112916b7f5b94f-rootfs.mount: Deactivated successfully.
Dec 13 01:30:26.451680 containerd[1829]: time="2024-12-13T01:30:26.451609796Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:30:26.480313 containerd[1829]: time="2024-12-13T01:30:26.480222137Z" level=info msg="CreateContainer within sandbox \"1cf17382bca10bd0ba4bb66ecf5ef06476c8195dbe72f940a9b46a9b660b7973\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8c0497241a0c4693faa4f78dd0d1b914a2315cc80e1927599a450c2b7c71526\""
Dec 13 01:30:26.480759 containerd[1829]: time="2024-12-13T01:30:26.480738895Z" level=info msg="StartContainer for \"e8c0497241a0c4693faa4f78dd0d1b914a2315cc80e1927599a450c2b7c71526\""
Dec 13 01:30:26.527450 containerd[1829]: time="2024-12-13T01:30:26.527400214Z" level=info msg="StartContainer for \"e8c0497241a0c4693faa4f78dd0d1b914a2315cc80e1927599a450c2b7c71526\" returns successfully"
Dec 13 01:30:26.914074 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:30:27.026011 kubelet[3394]: E1213 01:30:27.025359 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-jxx94" podUID="a0b1e275-aee8-439d-8c62-c13218164336"
Dec 13 01:30:27.054295 containerd[1829]: time="2024-12-13T01:30:27.054254952Z" level=info msg="StopPodSandbox for \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\""
Dec 13 01:30:27.054751 containerd[1829]: time="2024-12-13T01:30:27.054527351Z" level=info msg="TearDown network for sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" successfully"
Dec 13 01:30:27.054751 containerd[1829]: time="2024-12-13T01:30:27.054543391Z" level=info msg="StopPodSandbox for \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" returns successfully"
Dec 13 01:30:27.055162 containerd[1829]: time="2024-12-13T01:30:27.055003270Z" level=info msg="RemovePodSandbox for \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\""
Dec 13 01:30:27.055162 containerd[1829]: time="2024-12-13T01:30:27.055110029Z" level=info msg="Forcibly stopping sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\""
Dec 13 01:30:27.055521 containerd[1829]: time="2024-12-13T01:30:27.055311109Z" level=info msg="TearDown network for sandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" successfully"
Dec 13 01:30:27.065620 containerd[1829]: time="2024-12-13T01:30:27.065567193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:30:27.065904 containerd[1829]: time="2024-12-13T01:30:27.065738153Z" level=info msg="RemovePodSandbox \"cf6872e1c67c996d16f4aaf39eac93442cc6bf2e37e78bccd5f52afccf194ec6\" returns successfully"
Dec 13 01:30:27.066291 containerd[1829]: time="2024-12-13T01:30:27.066238351Z" level=info msg="StopPodSandbox for \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\""
Dec 13 01:30:27.066369 containerd[1829]: time="2024-12-13T01:30:27.066348351Z" level=info msg="TearDown network for sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" successfully"
Dec 13 01:30:27.066406 containerd[1829]: time="2024-12-13T01:30:27.066366710Z" level=info msg="StopPodSandbox for \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" returns successfully"
Dec 13 01:30:27.067152 containerd[1829]: time="2024-12-13T01:30:27.066665389Z" level=info msg="RemovePodSandbox for \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\""
Dec 13 01:30:27.067152 containerd[1829]: time="2024-12-13T01:30:27.066689269Z" level=info msg="Forcibly stopping sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\""
Dec 13 01:30:27.067152 containerd[1829]: time="2024-12-13T01:30:27.066740149Z" level=info msg="TearDown network for sandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" successfully"
Dec 13 01:30:27.075567 containerd[1829]: time="2024-12-13T01:30:27.075520119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:30:27.075704 containerd[1829]: time="2024-12-13T01:30:27.075579759Z" level=info msg="RemovePodSandbox \"f4ebf63becae835817b527149e9a0cf3cba5e37068f180ca978b3c13ef79712c\" returns successfully"
Dec 13 01:30:27.478780 kubelet[3394]: I1213 01:30:27.478036 3394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hqw8j" podStartSLOduration=6.477998247 podStartE2EDuration="6.477998247s" podCreationTimestamp="2024-12-13 01:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:27.476012414 +0000 UTC m=+180.591729112" watchObservedRunningTime="2024-12-13 01:30:27.477998247 +0000 UTC m=+180.593714905"
Dec 13 01:30:29.538431 systemd-networkd[1370]: lxc_health: Link UP
Dec 13 01:30:29.551841 systemd-networkd[1370]: lxc_health: Gained carrier
Dec 13 01:30:30.960160 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.146405 1779 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.146450 1779 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.146605 1779 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.146951 1779 omaha_request_params.cc:62] Current group set to stable
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147037 1779 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147068 1779 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147085 1779 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147110 1779 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147161 1779 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147168 1779 omaha_request_action.cc:272] Request:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]:
Dec 13 01:30:34.148162 update_engine[1779]: I20241213 01:30:34.147176 1779 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:30:34.149260 update_engine[1779]: I20241213 01:30:34.149240 1779 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:30:34.149632 update_engine[1779]: I20241213 01:30:34.149595 1779 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:30:34.149729 locksmithd[1864]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 01:30:34.155959 kubelet[3394]: E1213 01:30:34.155913 3394 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45988->127.0.0.1:41929: write tcp 127.0.0.1:45988->127.0.0.1:41929: write: broken pipe
Dec 13 01:30:34.156804 update_engine[1779]: E20241213 01:30:34.156718 1779 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:30:34.156804 update_engine[1779]: I20241213 01:30:34.156785 1779 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 01:30:36.375529 sshd[5285]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:36.378225 systemd[1]: sshd@25-10.200.20.42:22-10.200.16.10:49040.service: Deactivated successfully.
Dec 13 01:30:36.382500 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:30:36.383542 systemd-logind[1774]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:30:36.384605 systemd-logind[1774]: Removed session 28.