Mar 19 11:35:04.343024 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 19 11:35:04.343047 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025 Mar 19 11:35:04.343056 kernel: KASLR enabled Mar 19 11:35:04.343062 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 19 11:35:04.343070 kernel: printk: bootconsole [pl11] enabled Mar 19 11:35:04.343075 kernel: efi: EFI v2.7 by EDK II Mar 19 11:35:04.343082 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Mar 19 11:35:04.343089 kernel: random: crng init done Mar 19 11:35:04.343095 kernel: secureboot: Secure boot disabled Mar 19 11:35:04.343101 kernel: ACPI: Early table checksum verification disabled Mar 19 11:35:04.343107 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 19 11:35:04.343113 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343119 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343127 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 19 11:35:04.343134 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343140 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343147 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343155 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343161 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343168 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343174 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 19 11:35:04.343180 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 19 11:35:04.343187 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 19 11:35:04.343193 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 19 11:35:04.343199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 19 11:35:04.343206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 19 11:35:04.343212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 19 11:35:04.343219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 19 11:35:04.343227 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 19 11:35:04.343233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 19 11:35:04.343240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 19 11:35:04.343246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 19 11:35:04.343252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 19 11:35:04.343259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 19 11:35:04.343265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 19 11:35:04.343271 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 19 11:35:04.343278 kernel: Zone ranges: Mar 19 
11:35:04.343284 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 19 11:35:04.343291 kernel: DMA32 empty Mar 19 11:35:04.343297 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 19 11:35:04.343308 kernel: Movable zone start for each node Mar 19 11:35:04.343314 kernel: Early memory node ranges Mar 19 11:35:04.343321 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 19 11:35:04.343328 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Mar 19 11:35:04.343335 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Mar 19 11:35:04.343343 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Mar 19 11:35:04.343350 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 19 11:35:04.343357 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 19 11:35:04.343364 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 19 11:35:04.343370 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 19 11:35:04.343377 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 19 11:35:04.343384 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 19 11:35:04.343391 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 19 11:35:04.343398 kernel: psci: probing for conduit method from ACPI. Mar 19 11:35:04.343404 kernel: psci: PSCIv1.1 detected in firmware. Mar 19 11:35:04.343411 kernel: psci: Using standard PSCI v0.2 function IDs Mar 19 11:35:04.345461 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 19 11:35:04.345481 kernel: psci: SMC Calling Convention v1.4 Mar 19 11:35:04.345489 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 19 11:35:04.345496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 19 11:35:04.345503 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 19 11:35:04.345510 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 19 11:35:04.345517 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 19 11:35:04.345524 kernel: Detected PIPT I-cache on CPU0 Mar 19 11:35:04.345531 kernel: CPU features: detected: GIC system register CPU interface Mar 19 11:35:04.345538 kernel: CPU features: detected: Hardware dirty bit management Mar 19 11:35:04.345544 kernel: CPU features: detected: Spectre-BHB Mar 19 11:35:04.345551 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 19 11:35:04.345560 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 19 11:35:04.345567 kernel: CPU features: detected: ARM erratum 1418040 Mar 19 11:35:04.345574 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 19 11:35:04.345581 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 19 11:35:04.345588 kernel: alternatives: applying boot alternatives Mar 19 11:35:04.345596 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb Mar 19 11:35:04.345604 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Mar 19 11:35:04.345611 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 19 11:35:04.345618 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 19 11:35:04.345625 kernel: Fallback order for Node 0: 0 Mar 19 11:35:04.345632 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 19 11:35:04.345640 kernel: Policy zone: Normal Mar 19 11:35:04.345647 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:35:04.345654 kernel: software IO TLB: area num 2. Mar 19 11:35:04.345661 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB) Mar 19 11:35:04.345668 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved) Mar 19 11:35:04.345676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 19 11:35:04.345682 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:35:04.345690 kernel: rcu: RCU event tracing is enabled. Mar 19 11:35:04.345697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 19 11:35:04.345704 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:35:04.345712 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:35:04.345720 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 19 11:35:04.345727 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 19 11:35:04.345734 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 19 11:35:04.345741 kernel: GICv3: 960 SPIs implemented Mar 19 11:35:04.345748 kernel: GICv3: 0 Extended SPIs implemented Mar 19 11:35:04.345755 kernel: Root IRQ handler: gic_handle_irq Mar 19 11:35:04.345762 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 19 11:35:04.345768 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 19 11:35:04.345775 kernel: ITS: No ITS available, not enabling LPIs Mar 19 11:35:04.345782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 19 11:35:04.345789 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:35:04.345796 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 19 11:35:04.345805 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 19 11:35:04.345812 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 19 11:35:04.345819 kernel: Console: colour dummy device 80x25 Mar 19 11:35:04.345826 kernel: printk: console [tty1] enabled Mar 19 11:35:04.345833 kernel: ACPI: Core revision 20230628 Mar 19 11:35:04.345840 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 19 11:35:04.345848 kernel: pid_max: default: 32768 minimum: 301 Mar 19 11:35:04.345855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:35:04.345863 kernel: landlock: Up and running. Mar 19 11:35:04.345871 kernel: SELinux: Initializing. Mar 19 11:35:04.345878 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:35:04.345886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:35:04.345893 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 19 11:35:04.345900 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 19 11:35:04.345907 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Mar 19 11:35:04.345915 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Mar 19 11:35:04.345929 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 19 11:35:04.345937 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:35:04.345945 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:35:04.345952 kernel: Remapping and enabling EFI services. Mar 19 11:35:04.345959 kernel: smp: Bringing up secondary CPUs ... Mar 19 11:35:04.345969 kernel: Detected PIPT I-cache on CPU1 Mar 19 11:35:04.345976 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 19 11:35:04.345984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:35:04.345992 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 19 11:35:04.345999 kernel: smp: Brought up 1 node, 2 CPUs Mar 19 11:35:04.346008 kernel: SMP: Total of 2 processors activated. Mar 19 11:35:04.346016 kernel: CPU features: detected: 32-bit EL0 Support Mar 19 11:35:04.346023 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 19 11:35:04.346031 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 19 11:35:04.346039 kernel: CPU features: detected: CRC32 instructions Mar 19 11:35:04.346046 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 19 11:35:04.346054 kernel: CPU features: detected: LSE atomic instructions Mar 19 11:35:04.346061 kernel: CPU features: detected: Privileged Access Never Mar 19 11:35:04.346069 kernel: CPU: All CPU(s) started at EL1 Mar 19 11:35:04.346078 kernel: alternatives: applying system-wide alternatives Mar 19 11:35:04.346085 kernel: devtmpfs: initialized Mar 19 11:35:04.346093 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:35:04.346101 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 19 11:35:04.346108 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:35:04.346116 kernel: SMBIOS 3.1.0 present. Mar 19 11:35:04.346124 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 19 11:35:04.346131 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:35:04.346139 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 19 11:35:04.346148 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 19 11:35:04.346156 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 19 11:35:04.346163 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:35:04.346171 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 19 11:35:04.346178 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:35:04.346186 kernel: cpuidle: using governor menu Mar 19 11:35:04.346193 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 19 11:35:04.346201 kernel: ASID allocator initialised with 32768 entries Mar 19 11:35:04.346208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:35:04.346217 kernel: Serial: AMBA PL011 UART driver Mar 19 11:35:04.346225 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 19 11:35:04.346232 kernel: Modules: 0 pages in range for non-PLT usage Mar 19 11:35:04.346239 kernel: Modules: 509280 pages in range for PLT usage Mar 19 11:35:04.346247 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 19 11:35:04.346254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 19 11:35:04.346262 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 19 11:35:04.346270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 19 11:35:04.346277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:35:04.346286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:35:04.346294 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 19 11:35:04.346301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 19 11:35:04.346309 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:35:04.346316 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:35:04.346324 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:35:04.346331 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:35:04.346339 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:35:04.346346 kernel: ACPI: Interpreter enabled Mar 19 11:35:04.346355 kernel: ACPI: Using GIC for interrupt routing Mar 19 11:35:04.346363 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 19 11:35:04.346370 kernel: printk: console [ttyAMA0] enabled Mar 19 11:35:04.346378 kernel: printk: bootconsole [pl11] disabled Mar 19 11:35:04.346385 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 19 11:35:04.346393 kernel: iommu: Default domain type: Translated Mar 19 11:35:04.346400 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 19 11:35:04.346408 kernel: efivars: Registered efivars operations Mar 19 11:35:04.346429 kernel: vgaarb: loaded Mar 19 11:35:04.346440 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 19 11:35:04.346448 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 11:35:04.346456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:35:04.346463 kernel: pnp: PnP ACPI init Mar 19 11:35:04.346470 kernel: pnp: PnP ACPI: found 0 devices Mar 19 11:35:04.346478 kernel: NET: Registered PF_INET protocol family Mar 19 11:35:04.346485 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 19 11:35:04.346493 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 19 11:35:04.346500 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:35:04.346510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 19 11:35:04.346517 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 19 11:35:04.346525 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 19 11:35:04.346533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:35:04.346540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:35:04.346548 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:35:04.346555 kernel: PCI: CLS 0 bytes, default 64 Mar 19 11:35:04.346563 kernel: kvm [1]: HYP mode not available Mar 19 11:35:04.346570 kernel: Initialise system trusted keyrings Mar 19 11:35:04.346579 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 19 11:35:04.346587 kernel: Key type asymmetric registered Mar 19 11:35:04.346594 kernel: Asymmetric key parser 'x509' registered Mar 19 11:35:04.346601 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 19 11:35:04.346609 kernel: io scheduler mq-deadline registered Mar 19 11:35:04.346616 kernel: io scheduler kyber registered Mar 19 11:35:04.346624 kernel: io scheduler bfq registered Mar 19 11:35:04.346631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:35:04.346639 kernel: thunder_xcv, ver 1.0 Mar 19 11:35:04.346647 kernel: thunder_bgx, ver 1.0 Mar 19 11:35:04.346655 kernel: nicpf, ver 1.0 Mar 19 11:35:04.346662 kernel: nicvf, ver 1.0 Mar 19 11:35:04.346819 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 19 11:35:04.346895 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:35:03 UTC (1742384103) Mar 19 11:35:04.346905 kernel: efifb: probing for efifb Mar 19 11:35:04.346913 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 19 11:35:04.346921 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 19 11:35:04.346930 kernel: efifb: scrolling: redraw Mar 19 11:35:04.346938 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 19 11:35:04.346946 kernel: Console: switching to colour frame buffer device 128x48 Mar 19 11:35:04.346953 kernel: fb0: EFI VGA frame buffer device Mar 19 11:35:04.346961 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 19 11:35:04.346968 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 19 11:35:04.346975 kernel: No ACPI PMU IRQ for CPU0 Mar 19 11:35:04.346983 kernel: No ACPI PMU IRQ for CPU1 Mar 19 11:35:04.346990 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Mar 19 11:35:04.346999 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 19 11:35:04.347007 kernel: watchdog: Hard watchdog permanently disabled Mar 19 11:35:04.347014 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:35:04.347022 kernel: Segment Routing with IPv6 Mar 19 11:35:04.347029 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:35:04.347037 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:35:04.347044 kernel: Key type dns_resolver registered Mar 19 11:35:04.347051 kernel: registered taskstats version 1 Mar 19 11:35:04.347059 kernel: Loading compiled-in X.509 certificates Mar 19 11:35:04.347068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff' Mar 19 11:35:04.347076 kernel: Key type .fscrypt registered Mar 19 11:35:04.347083 kernel: Key type fscrypt-provisioning registered Mar 19 11:35:04.347091 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 19 11:35:04.347098 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:35:04.347106 kernel: ima: No architecture policies found Mar 19 11:35:04.347113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 19 11:35:04.347120 kernel: clk: Disabling unused clocks Mar 19 11:35:04.347128 kernel: Freeing unused kernel memory: 38336K Mar 19 11:35:04.347137 kernel: Run /init as init process Mar 19 11:35:04.347144 kernel: with arguments: Mar 19 11:35:04.347152 kernel: /init Mar 19 11:35:04.347159 kernel: with environment: Mar 19 11:35:04.347166 kernel: HOME=/ Mar 19 11:35:04.347173 kernel: TERM=linux Mar 19 11:35:04.347180 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:35:04.347189 systemd[1]: Successfully made /usr/ read-only. Mar 19 11:35:04.347201 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:35:04.347210 systemd[1]: Detected virtualization microsoft. Mar 19 11:35:04.347218 systemd[1]: Detected architecture arm64. Mar 19 11:35:04.347225 systemd[1]: Running in initrd. Mar 19 11:35:04.347233 systemd[1]: No hostname configured, using default hostname. Mar 19 11:35:04.347241 systemd[1]: Hostname set to . Mar 19 11:35:04.347249 systemd[1]: Initializing machine ID from random generator. Mar 19 11:35:04.347257 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:35:04.347267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:35:04.347275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:35:04.347284 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:35:04.347292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:35:04.347300 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:35:04.347309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:35:04.347318 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:35:04.347328 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:35:04.347336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:35:04.347344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:35:04.347353 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:35:04.347361 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:35:04.347369 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:35:04.347377 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:35:04.347385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:35:04.347394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:35:04.347403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:35:04.347411 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Mar 19 11:35:04.349473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:35:04.349485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:35:04.349494 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:35:04.349502 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:35:04.349511 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:35:04.349519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:35:04.349532 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:35:04.349541 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 11:35:04.349549 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:35:04.349557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:35:04.349600 systemd-journald[218]: Collecting audit messages is disabled. Mar 19 11:35:04.349623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:04.349632 systemd-journald[218]: Journal started Mar 19 11:35:04.349652 systemd-journald[218]: Runtime Journal (/run/log/journal/9c9cb81ff6ef458f9e76ad26cfe79bdc) is 8M, max 78.5M, 70.5M free. Mar 19 11:35:04.359101 systemd-modules-load[220]: Inserted module 'overlay' Mar 19 11:35:04.374962 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:35:04.380906 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:35:04.415585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 11:35:04.415612 kernel: Bridge firewalling registered Mar 19 11:35:04.405387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:35:04.422399 systemd-modules-load[220]: Inserted module 'br_netfilter' Mar 19 11:35:04.423280 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:35:04.434473 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:35:04.445937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:04.472805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:35:04.488642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:35:04.513598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:35:04.539560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:35:04.547131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:35:04.572452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:35:04.578892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:35:04.591190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:35:04.616945 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:35:04.632602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:35:04.650127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 19 11:35:04.665115 dracut-cmdline[253]: dracut-dracut-053 Mar 19 11:35:04.665115 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb Mar 19 11:35:04.674000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:35:04.732221 systemd-resolved[256]: Positive Trust Anchors: Mar 19 11:35:04.736717 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:35:04.736754 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:35:04.740648 systemd-resolved[256]: Defaulting to hostname 'linux'. Mar 19 11:35:04.741642 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:35:04.794289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:35:04.840451 kernel: SCSI subsystem initialized Mar 19 11:35:04.848455 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:35:04.860474 kernel: iscsi: registered transport (tcp) Mar 19 11:35:04.878633 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:35:04.878664 kernel: QLogic iSCSI HBA Driver Mar 19 11:35:04.920042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 11:35:04.936707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:35:04.991536 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:35:04.991582 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:35:04.998810 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:35:05.047443 kernel: raid6: neonx8 gen() 15755 MB/s Mar 19 11:35:05.067445 kernel: raid6: neonx4 gen() 15779 MB/s Mar 19 11:35:05.087429 kernel: raid6: neonx2 gen() 13245 MB/s Mar 19 11:35:05.108429 kernel: raid6: neonx1 gen() 10497 MB/s Mar 19 11:35:05.128428 kernel: raid6: int64x8 gen() 6789 MB/s Mar 19 11:35:05.148430 kernel: raid6: int64x4 gen() 7357 MB/s Mar 19 11:35:05.169429 kernel: raid6: int64x2 gen() 6114 MB/s Mar 19 11:35:05.192580 kernel: raid6: int64x1 gen() 5047 MB/s Mar 19 11:35:05.192609 kernel: raid6: using algorithm neonx4 gen() 15779 MB/s Mar 19 11:35:05.216915 kernel: raid6: .... 
xor() 12360 MB/s, rmw enabled Mar 19 11:35:05.216984 kernel: raid6: using neon recovery algorithm Mar 19 11:35:05.229304 kernel: xor: measuring software checksum speed Mar 19 11:35:05.229374 kernel: 8regs : 21556 MB/sec Mar 19 11:35:05.233240 kernel: 32regs : 21607 MB/sec Mar 19 11:35:05.236802 kernel: arm64_neon : 27908 MB/sec Mar 19 11:35:05.241042 kernel: xor: using function: arm64_neon (27908 MB/sec) Mar 19 11:35:05.291442 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:35:05.303735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:35:05.320618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:35:05.345449 systemd-udevd[439]: Using default interface naming scheme 'v255'. Mar 19 11:35:05.350991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:35:05.370550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:35:05.396292 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Mar 19 11:35:05.425512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:35:05.442915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:35:05.482373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:35:05.504601 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:35:05.528293 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:35:05.537297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:35:05.550527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:35:05.569289 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:35:05.592878 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:35:05.627398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:35:05.651952 kernel: hv_vmbus: Vmbus version:5.3 Mar 19 11:35:05.651976 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 19 11:35:05.652500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:35:05.697452 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 19 11:35:05.697482 kernel: hv_vmbus: registering driver hid_hyperv Mar 19 11:35:05.697492 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 19 11:35:05.697502 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 19 11:35:05.697512 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 19 11:35:05.652672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:35:05.712172 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 19 11:35:05.739495 kernel: hv_vmbus: registering driver hv_storvsc Mar 19 11:35:05.739513 kernel: PTP clock support registered Mar 19 11:35:05.739523 kernel: scsi host0: storvsc_host_t Mar 19 11:35:05.689654 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 19 11:35:05.775014 kernel: hv_vmbus: registering driver hv_netvsc Mar 19 11:35:05.775044 kernel: scsi host1: storvsc_host_t Mar 19 11:35:05.775206 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 19 11:35:05.775227 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 19 11:35:05.726642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:35:05.726888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:05.733094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:05.765185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:05.790293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:05.819741 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:35:05.501698 kernel: hv_utils: Registering HyperV Utility Driver Mar 19 11:35:05.509117 kernel: hv_vmbus: registering driver hv_utils Mar 19 11:35:05.509136 kernel: hv_utils: Heartbeat IC version 3.0 Mar 19 11:35:05.509145 kernel: hv_utils: Shutdown IC version 3.2 Mar 19 11:35:05.509155 kernel: hv_utils: TimeSync IC version 4.0 Mar 19 11:35:05.509163 systemd-journald[218]: Time jumped backwards, rotating. Mar 19 11:35:05.500276 systemd-resolved[256]: Clock change detected. Flushing caches. Mar 19 11:35:05.527779 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 19 11:35:05.542771 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 19 11:35:05.542797 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 19 11:35:05.542931 kernel: hv_netvsc 002248b5-91ea-0022-48b5-91ea002248b5 eth0: VF slot 1 added Mar 19 11:35:05.529491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:35:05.558253 kernel: hv_vmbus: registering driver hv_pci Mar 19 11:35:05.558302 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 19 11:35:05.631858 kernel: hv_pci e17a4cc8-910e-48f4-b151-a9aa31c2fdb8: PCI VMBus probing: Using version 0x10004 Mar 19 11:35:05.676337 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 19 11:35:05.676488 kernel: hv_pci e17a4cc8-910e-48f4-b151-a9aa31c2fdb8: PCI host bridge to bus 910e:00 Mar 19 11:35:05.676579 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 19 11:35:05.676667 kernel: pci_bus 910e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 19 11:35:05.676773 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 19 11:35:05.676862 kernel: pci_bus 910e:00: No busn resource found for root bus, will use [bus 00-ff] Mar 19 11:35:05.676950 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 19 11:35:05.677040 kernel: pci 910e:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 19 11:35:05.677152 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:05.677161 kernel: pci 910e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 19 11:35:05.677283 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 19 11:35:05.677383 kernel: pci 910e:00:02.0: enabling Extended Tags Mar 19 11:35:05.677474 kernel: pci 910e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 910e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 19 11:35:05.677561 kernel: pci_bus 910e:00: busn_res: [bus 00-ff] end is updated to 00 Mar 19 11:35:05.677644 kernel: pci 910e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 19 11:35:05.716442 kernel: mlx5_core 910e:00:02.0: enabling device (0000 -> 0002) Mar 19 11:35:06.013873 kernel: mlx5_core 910e:00:02.0: firmware version: 16.31.2424 Mar 19 11:35:06.014018 kernel: hv_netvsc 002248b5-91ea-0022-48b5-91ea002248b5 eth0: VF registering: eth1 Mar 19 11:35:06.014134 kernel: mlx5_core 910e:00:02.0 eth1: joined to eth0 Mar 19 11:35:06.014727 kernel: mlx5_core 910e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 19 11:35:06.021264 kernel: mlx5_core 910e:00:02.0 enP37134s1: renamed from eth1 Mar 19 11:35:06.149172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 19 11:35:06.257289 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (490) Mar 19 11:35:06.273177 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 19 11:35:06.287480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (500) Mar 19 11:35:06.288002 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 19 11:35:06.313826 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 19 11:35:06.328900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 19 11:35:06.346353 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:35:06.368677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:06.378254 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:07.389462 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:35:07.389519 disk-uuid[605]: The operation has completed successfully. 
Mar 19 11:35:07.454596 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:35:07.454682 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:35:07.503366 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:35:07.516679 sh[691]: Success Mar 19 11:35:07.547267 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:35:07.728065 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:35:07.747359 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:35:07.757111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:35:07.788167 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:35:07.788246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:07.795138 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:35:07.795183 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:35:07.805437 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:35:08.092646 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:35:08.098265 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:35:08.117486 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:35:08.126419 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:35:08.170854 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:08.171276 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:08.175635 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:35:08.196016 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:35:08.203101 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:35:08.217750 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:08.226359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:35:08.242635 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:35:08.266258 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:35:08.281418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:35:08.313079 systemd-networkd[876]: lo: Link UP Mar 19 11:35:08.313094 systemd-networkd[876]: lo: Gained carrier Mar 19 11:35:08.314837 systemd-networkd[876]: Enumeration completed Mar 19 11:35:08.315440 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:08.315443 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:35:08.316664 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:35:08.326685 systemd[1]: Reached target network.target - Network. 
Mar 19 11:35:08.414255 kernel: mlx5_core 910e:00:02.0 enP37134s1: Link up Mar 19 11:35:08.492251 kernel: hv_netvsc 002248b5-91ea-0022-48b5-91ea002248b5 eth0: Data path switched to VF: enP37134s1 Mar 19 11:35:08.493060 systemd-networkd[876]: enP37134s1: Link UP Mar 19 11:35:08.493154 systemd-networkd[876]: eth0: Link UP Mar 19 11:35:08.493300 systemd-networkd[876]: eth0: Gained carrier Mar 19 11:35:08.493308 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:08.516782 systemd-networkd[876]: enP37134s1: Gained carrier Mar 19 11:35:08.533282 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 19 11:35:08.833134 ignition[859]: Ignition 2.20.0 Mar 19 11:35:08.833147 ignition[859]: Stage: fetch-offline Mar 19 11:35:08.835253 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:35:08.833186 ignition[859]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:08.849509 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 19 11:35:08.833195 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:08.833332 ignition[859]: parsed url from cmdline: "" Mar 19 11:35:08.833336 ignition[859]: no config URL provided Mar 19 11:35:08.833341 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:35:08.833348 ignition[859]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:35:08.833353 ignition[859]: failed to fetch config: resource requires networking Mar 19 11:35:08.833531 ignition[859]: Ignition finished successfully Mar 19 11:35:08.879541 ignition[885]: Ignition 2.20.0 Mar 19 11:35:08.879551 ignition[885]: Stage: fetch Mar 19 11:35:08.879758 ignition[885]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:08.879768 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:08.879922 ignition[885]: parsed url from cmdline: "" Mar 19 11:35:08.879926 ignition[885]: no config URL provided Mar 19 11:35:08.879931 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:35:08.879943 ignition[885]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:35:08.879970 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 19 11:35:08.970036 ignition[885]: GET result: OK Mar 19 11:35:08.970124 ignition[885]: config has been read from IMDS userdata Mar 19 11:35:08.974661 unknown[885]: fetched base config from "system" Mar 19 11:35:08.970157 ignition[885]: parsing config with SHA512: ef4cf2b2b3e3167e10a2f5ec2d160e8b26281574cd015ddc512ed4d9e54284a91c320398efe2a29b857b0a1f31a83ce5d9bc2dc4dc0f2789d21658c6c3cea230 Mar 19 11:35:08.974668 unknown[885]: fetched base config from "system" Mar 19 11:35:08.975397 ignition[885]: fetch: fetch complete Mar 19 11:35:08.974675 unknown[885]: fetched user config from "azure" Mar 19 11:35:08.975403 ignition[885]: fetch: fetch passed Mar 19 11:35:08.980181 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:35:08.975473 ignition[885]: Ignition finished successfully Mar 19 11:35:08.996503 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:35:09.017166 ignition[891]: Ignition 2.20.0 Mar 19 11:35:09.023444 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 19 11:35:09.017180 ignition[891]: Stage: kargs Mar 19 11:35:09.045534 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:35:09.017417 ignition[891]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:09.017427 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:09.018484 ignition[891]: kargs: kargs passed Mar 19 11:35:09.018534 ignition[891]: Ignition finished successfully Mar 19 11:35:09.079314 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:35:09.072125 ignition[898]: Ignition 2.20.0 Mar 19 11:35:09.089167 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:35:09.072132 ignition[898]: Stage: disks Mar 19 11:35:09.100321 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:35:09.072368 ignition[898]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:09.110698 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:35:09.072378 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:09.123889 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:35:09.073545 ignition[898]: disks: disks passed Mar 19 11:35:09.132569 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:35:09.073604 ignition[898]: Ignition finished successfully Mar 19 11:35:09.163485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:35:09.252285 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 19 11:35:09.265327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:35:09.283456 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:35:09.343245 kernel: EXT4-fs (sda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:35:09.344094 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:35:09.348957 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:35:09.391332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:35:09.399401 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:35:09.411731 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 19 11:35:09.425497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:35:09.463797 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918) Mar 19 11:35:09.463822 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:09.463833 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:09.425544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:35:09.486303 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:35:09.486326 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:35:09.452201 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:35:09.492480 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:35:09.499982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:35:09.568371 systemd-networkd[876]: eth0: Gained IPv6LL Mar 19 11:35:09.853748 coreos-metadata[920]: Mar 19 11:35:09.853 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 19 11:35:09.865185 coreos-metadata[920]: Mar 19 11:35:09.865 INFO Fetch successful Mar 19 11:35:09.865185 coreos-metadata[920]: Mar 19 11:35:09.865 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 19 11:35:09.883240 coreos-metadata[920]: Mar 19 11:35:09.882 INFO Fetch successful Mar 19 11:35:09.896513 coreos-metadata[920]: Mar 19 11:35:09.896 INFO wrote hostname ci-4230.1.0-a-4d5ab2e439 to /sysroot/etc/hostname Mar 19 11:35:09.906093 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:35:10.016360 systemd-networkd[876]: enP37134s1: Gained IPv6LL Mar 19 11:35:10.042745 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:35:10.092133 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:35:10.101753 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:35:10.110725 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:35:10.727350 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:35:10.742385 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:35:10.749623 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:35:10.767482 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:35:10.782087 kernel: BTRFS info (device sda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:10.802546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:35:10.812191 ignition[1042]: INFO : Ignition 2.20.0 Mar 19 11:35:10.818449 ignition[1042]: INFO : Stage: mount Mar 19 11:35:10.818449 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:10.818449 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:10.818449 ignition[1042]: INFO : mount: mount passed Mar 19 11:35:10.818449 ignition[1042]: INFO : Ignition finished successfully Mar 19 11:35:10.822024 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:35:10.846410 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:35:10.865452 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:35:10.894887 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1054) Mar 19 11:35:10.894938 kernel: BTRFS info (device sda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:35:10.900999 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:35:10.905287 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:35:10.912255 kernel: BTRFS info (device sda6): auto enabling async discard Mar 19 11:35:10.913569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:35:10.938573 ignition[1072]: INFO : Ignition 2.20.0 Mar 19 11:35:10.944023 ignition[1072]: INFO : Stage: files Mar 19 11:35:10.944023 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:10.944023 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:10.944023 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:35:10.967550 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:35:10.967550 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:35:11.040753 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:35:11.048174 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:35:11.048174 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:35:11.041154 unknown[1072]: wrote ssh authorized keys file for user: core Mar 19 11:35:11.091562 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:35:11.102921 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 19 11:35:11.176379 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:35:13.743767 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:35:13.755662 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 19 11:35:14.065580 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 19 11:35:14.257940 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:35:14.257940 ignition[1072]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:35:14.290339 ignition[1072]: INFO : files: files passed Mar 19 11:35:14.290339 ignition[1072]: INFO : Ignition finished successfully Mar 19 11:35:14.292272 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:35:14.337543 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:35:14.355422 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:35:14.385139 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:35:14.417096 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:35:14.417096 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:35:14.385264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:35:14.450874 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:35:14.398772 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:35:14.412587 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:35:14.451557 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:35:14.494028 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:35:14.494158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 19 11:35:14.505732 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:35:14.517976 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:35:14.528555 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:35:14.543511 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:35:14.569078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:35:14.588539 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:35:14.609316 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:35:14.609429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:35:14.621491 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:35:14.634085 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:35:14.646491 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:35:14.657674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:35:14.657766 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:35:14.674045 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:35:14.685829 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:35:14.695869 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:35:14.714191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:35:14.725666 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:35:14.737584 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:35:14.748956 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:35:14.761166 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:35:14.772981 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:35:14.783565 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:35:14.792803 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:35:14.792898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:35:14.806708 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:35:14.813278 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:35:14.825042 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:35:14.825099 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:35:14.837134 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:35:14.837236 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:35:14.853916 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:35:14.853977 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:35:14.868072 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:35:14.868129 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:35:14.881248 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 19 11:35:14.881311 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 19 11:35:14.944399 ignition[1125]: INFO : Ignition 2.20.0 Mar 19 11:35:14.944399 ignition[1125]: INFO : Stage: umount Mar 19 11:35:14.944399 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:35:14.944399 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 19 11:35:14.944399 ignition[1125]: INFO : umount: umount passed Mar 19 11:35:14.944399 ignition[1125]: INFO : Ignition finished successfully Mar 19 11:35:14.912430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:35:14.937126 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:35:14.948692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:35:14.948773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:35:14.965906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:35:14.965976 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:35:14.978760 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:35:14.978851 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:35:14.987318 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:35:14.987671 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:35:14.987714 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:35:14.995592 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:35:14.995652 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:35:15.005628 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 19 11:35:15.005681 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 19 11:35:15.017102 systemd[1]: Stopped target network.target - Network. Mar 19 11:35:15.022095 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:35:15.022171 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:35:15.033208 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:35:15.043134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:35:15.048248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:35:15.055920 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:35:15.066597 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:35:15.077572 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:35:15.077618 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:35:15.087493 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:35:15.087525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:35:15.098025 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:35:15.098082 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:35:15.107988 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:35:15.108039 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:35:15.118495 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:35:15.128157 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Mar 19 11:35:15.138726 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:35:15.138825 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:35:15.149663 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:35:15.149770 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:35:15.165958 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:35:15.166191 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:35:15.166445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:35:15.184667 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:35:15.186932 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:35:15.186990 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:35:15.425846 kernel: hv_netvsc 002248b5-91ea-0022-48b5-91ea002248b5 eth0: Data path switched from VF: enP37134s1 Mar 19 11:35:15.196312 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:35:15.196390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:35:15.226691 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:35:15.233844 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:35:15.233926 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:35:15.244538 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:35:15.244588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:35:15.259338 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:35:15.259405 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:35:15.266317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:35:15.266383 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:35:15.282976 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:35:15.290012 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:35:15.290083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:35:15.311849 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:35:15.312004 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:35:15.324094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:35:15.324145 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:35:15.333987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:35:15.334019 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:35:15.343692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:35:15.343750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:35:15.361172 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:35:15.361242 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:35:15.373199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 19 11:35:15.373256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:35:15.404882 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:35:15.419640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:35:15.645916 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Mar 19 11:35:15.419715 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:35:15.437021 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:35:15.437082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:35:15.444111 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:35:15.444169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:35:15.456313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:35:15.456369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:15.473640 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:35:15.473706 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:35:15.474008 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:35:15.474128 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:35:15.486520 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:35:15.486614 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:35:15.496698 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:35:15.530494 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:35:15.551025 systemd[1]: Switching root. Mar 19 11:35:15.740439 systemd-journald[218]: Journal stopped Mar 19 11:35:20.275371 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:35:20.275395 kernel: SELinux: policy capability open_perms=1 Mar 19 11:35:20.275406 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:35:20.275414 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:35:20.275424 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:35:20.275431 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:35:20.275440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:35:20.275447 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:35:20.275456 kernel: audit: type=1403 audit(1742384116.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:35:20.275466 systemd[1]: Successfully loaded SELinux policy in 144.129ms. Mar 19 11:35:20.275477 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.551ms. Mar 19 11:35:20.275487 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:35:20.275495 systemd[1]: Detected virtualization microsoft. Mar 19 11:35:20.275503 systemd[1]: Detected architecture arm64. Mar 19 11:35:20.275512 systemd[1]: Detected first boot. 
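
After the switch to the real root, the kernel prints the SELinux policy capabilities and systemd reports the detected virtualization, architecture, and first boot. A minimal sketch of reading the same information from a running system, assuming selinuxfs is mounted at /sys/fs/selinux and systemd-detect-virt is on PATH:

    # Sketch: inspect SELinux policy capabilities and the detected
    # virtualization, mirroring the messages logged above.
    import pathlib
    import subprocess

    caps_dir = pathlib.Path("/sys/fs/selinux/policy_capabilities")
    if caps_dir.is_dir():
        for cap in sorted(caps_dir.iterdir()):
            print(f"SELinux: policy capability {cap.name}={cap.read_text().strip()}")

    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print("Detected virtualization:", virt.stdout.strip() or "none")
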
Mar 19 11:35:20.275522 systemd[1]: Hostname set to . Mar 19 11:35:20.275531 systemd[1]: Initializing machine ID from random generator. Mar 19 11:35:20.275540 zram_generator::config[1168]: No configuration found. Mar 19 11:35:20.275549 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:35:20.275557 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:35:20.275567 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:35:20.275575 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:35:20.275585 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:35:20.275594 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:35:20.275603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:35:20.275612 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:35:20.275621 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:35:20.275630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:35:20.275639 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:35:20.275650 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:35:20.275659 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:35:20.275668 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:35:20.275676 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:35:20.275685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:35:20.275694 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:35:20.275703 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:35:20.275712 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:35:20.275723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:35:20.275731 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 19 11:35:20.275740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:35:20.275752 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:35:20.275761 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:35:20.275770 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:35:20.275779 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:35:20.275788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:35:20.275799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:35:20.275808 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:35:20.275816 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:35:20.275825 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:35:20.275834 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:35:20.275845 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Mar 19 11:35:20.275856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:35:20.275865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:35:20.275874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:35:20.275884 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:35:20.275893 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:35:20.275902 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:35:20.275911 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:35:20.275921 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:35:20.275930 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:35:20.275940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:35:20.275949 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:35:20.275959 systemd[1]: Reached target machines.target - Containers. Mar 19 11:35:20.275968 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:35:20.275977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:35:20.275987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:35:20.275997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:35:20.276006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:35:20.276015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:35:20.276025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:35:20.276034 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:35:20.276045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:35:20.276054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:35:20.276063 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:35:20.276074 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:35:20.276083 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:35:20.276092 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:35:20.276102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:35:20.276111 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:35:20.276120 kernel: loop: module loaded Mar 19 11:35:20.276128 kernel: ACPI: bus type drm_connector registered Mar 19 11:35:20.276136 kernel: fuse: init (API version 7.39) Mar 19 11:35:20.276160 systemd-journald[1272]: Collecting audit messages is disabled. Mar 19 11:35:20.276182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
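
The modprobe@<name>.service instances being started above are template units that load one module per instance name, while systemd-modules-load.service handles statically configured modules. A small sketch that lists those static entries from the usual modules-load.d directories (any of which may be absent on a given image):

    # Sketch: enumerate modules configured for systemd-modules-load.service.
    import pathlib

    search_path = [
        "/etc/modules-load.d",
        "/run/modules-load.d",
        "/usr/lib/modules-load.d",
    ]
    for d in map(pathlib.Path, search_path):
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            for line in conf.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith(("#", ";")):
                    print(f"{conf}: load module {line}")
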
Mar 19 11:35:20.276193 systemd-journald[1272]: Journal started Mar 19 11:35:20.276215 systemd-journald[1272]: Runtime Journal (/run/log/journal/32fa5bcbbebb441dba14caf829d23735) is 8M, max 78.5M, 70.5M free. Mar 19 11:35:19.356491 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:35:19.368217 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 19 11:35:19.368641 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:35:19.368984 systemd[1]: systemd-journald.service: Consumed 3.209s CPU time. Mar 19 11:35:20.298252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:35:20.316309 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:35:20.334251 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:35:20.351522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:35:20.351880 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:35:20.360305 systemd[1]: Stopped verity-setup.service. Mar 19 11:35:20.377665 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:35:20.378511 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:35:20.384408 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:35:20.390763 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:35:20.396033 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:35:20.401851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:35:20.407902 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:35:20.413189 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:35:20.423266 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:35:20.430037 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:35:20.430213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:35:20.436867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:35:20.437035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:35:20.443677 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:35:20.443839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:35:20.459445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:35:20.459613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:35:20.466301 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:35:20.466472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:35:20.473286 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:35:20.473451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:35:20.479534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:35:20.485772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:35:20.492840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 19 11:35:20.499649 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:35:20.506745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:35:20.526071 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:35:20.539341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:35:20.546534 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:35:20.552670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:35:20.552712 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:35:20.559303 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:35:20.567215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:35:20.574450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:35:20.580576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:35:20.594168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:35:20.603450 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:35:20.611981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:35:20.613093 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:35:20.620699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:35:20.621800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:35:20.630465 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:35:20.640994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:35:20.649272 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:35:20.657916 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:35:20.666797 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:35:20.679545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:35:20.687972 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:35:20.702287 udevadm[1311]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 19 11:35:20.703057 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:35:20.716249 kernel: loop0: detected capacity change from 0 to 123192 Mar 19 11:35:20.719130 systemd-journald[1272]: Time spent on flushing to /var/log/journal/32fa5bcbbebb441dba14caf829d23735 is 64.221ms for 918 entries. Mar 19 11:35:20.719130 systemd-journald[1272]: System Journal (/var/log/journal/32fa5bcbbebb441dba14caf829d23735) is 11.8M, max 2.6G, 2.6G free. Mar 19 11:35:20.871042 systemd-journald[1272]: Received client request to flush runtime journal. 
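
The flush request above is what systemd-journal-flush.service issues to move logging from the runtime journal in /run to the persistent journal under /var/log/journal. A sketch of checking the result from userspace, assuming journalctl is on PATH and the caller has privileges to read the journals:

    # Sketch: locate the persistent journal directory and report disk usage.
    import pathlib
    import subprocess

    machine_id = pathlib.Path("/etc/machine-id").read_text().strip()
    persistent = pathlib.Path("/var/log/journal") / machine_id
    print("persistent journal directory:", persistent, "exists:", persistent.is_dir())

    usage = subprocess.run(["journalctl", "--disk-usage"], capture_output=True, text=True)
    print(usage.stdout.strip())
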
Mar 19 11:35:20.871095 systemd-journald[1272]: /var/log/journal/32fa5bcbbebb441dba14caf829d23735/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Mar 19 11:35:20.871124 systemd-journald[1272]: Rotating system journal. Mar 19 11:35:20.725512 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:35:20.731843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:35:20.769078 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Mar 19 11:35:20.769094 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Mar 19 11:35:20.775287 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:35:20.785475 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:35:20.872196 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:35:20.873248 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:35:20.881118 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:35:21.072260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:35:21.099251 kernel: loop1: detected capacity change from 0 to 189592 Mar 19 11:35:21.128411 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:35:21.140151 kernel: loop2: detected capacity change from 0 to 113512 Mar 19 11:35:21.147494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:35:21.166211 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Mar 19 11:35:21.166247 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Mar 19 11:35:21.170283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:35:21.451272 kernel: loop3: detected capacity change from 0 to 28720 Mar 19 11:35:21.657735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:35:21.670399 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:35:21.692944 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Mar 19 11:35:21.735256 kernel: loop4: detected capacity change from 0 to 123192 Mar 19 11:35:21.744254 kernel: loop5: detected capacity change from 0 to 189592 Mar 19 11:35:21.753243 kernel: loop6: detected capacity change from 0 to 113512 Mar 19 11:35:21.762238 kernel: loop7: detected capacity change from 0 to 28720 Mar 19 11:35:21.765586 (sd-merge)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 19 11:35:21.766025 (sd-merge)[1339]: Merged extensions into '/usr'. Mar 19 11:35:21.769441 systemd[1]: Reload requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:35:21.769566 systemd[1]: Reloading... Mar 19 11:35:21.835262 zram_generator::config[1367]: No configuration found. Mar 19 11:35:22.004743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:35:22.135192 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 19 11:35:22.136794 systemd[1]: Reloading finished in 366 ms. 
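
The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images onto /usr. A sketch of inspecting that state, using the common sysext image directories and the systemd-sysext status command:

    # Sketch: list sysext images and show the merged overlay state.
    import pathlib
    import subprocess

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = pathlib.Path(d)
        if p.is_dir():
            for image in sorted(p.iterdir()):
                print(f"{d}: {image.name}")

    status = subprocess.run(["systemd-sysext", "status"], capture_output=True, text=True)
    print(status.stdout)
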
Mar 19 11:35:22.144262 kernel: mousedev: PS/2 mouse device common for all mice Mar 19 11:35:22.144431 kernel: hv_vmbus: registering driver hyperv_fb Mar 19 11:35:22.144454 kernel: hv_vmbus: registering driver hv_balloon Mar 19 11:35:22.167267 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 19 11:35:22.173241 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 19 11:35:22.186608 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 19 11:35:22.186717 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 19 11:35:22.198913 kernel: Console: switching to colour dummy device 80x25 Mar 19 11:35:22.198886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:35:22.214819 kernel: Console: switching to colour frame buffer device 128x48 Mar 19 11:35:22.217261 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:35:22.260610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1425) Mar 19 11:35:22.263411 systemd[1]: Starting ensure-sysext.service... Mar 19 11:35:22.288117 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:35:22.310441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:35:22.334576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:22.346125 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:35:22.346697 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:35:22.347472 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:35:22.347794 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Mar 19 11:35:22.347943 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Mar 19 11:35:22.353119 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:35:22.353130 systemd-tmpfiles[1507]: Skipping /boot Mar 19 11:35:22.362892 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:35:22.363486 systemd-tmpfiles[1507]: Skipping /boot Mar 19 11:35:22.385438 systemd[1]: Reload requested from client PID 1476 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:35:22.385455 systemd[1]: Reloading... Mar 19 11:35:22.465315 zram_generator::config[1556]: No configuration found. Mar 19 11:35:22.585186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:35:22.692881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 19 11:35:22.699458 systemd[1]: Reloading finished in 313 ms. Mar 19 11:35:22.726364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:35:22.745602 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:35:22.767552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:35:22.775533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 19 11:35:22.785111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:35:22.792500 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:35:22.804534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:35:22.816336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:35:22.829163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:35:22.835211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:35:22.838957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:35:22.848383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:35:22.863575 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:35:22.875987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:35:22.884131 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:35:22.893374 lvm[1617]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:35:22.899435 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:35:22.919486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:35:22.919707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:22.928196 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:35:22.929385 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:35:22.939091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:35:22.939313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:35:22.947521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:35:22.948566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:35:22.958031 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:35:22.958718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:35:22.969471 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:35:22.977142 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:35:22.988872 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:35:23.020171 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:35:23.030513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:35:23.037424 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:35:23.057670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:35:23.067415 lvm[1658]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 19 11:35:23.078561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:35:23.089533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:35:23.097661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:35:23.097816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:35:23.104492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:35:23.123322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:35:23.130571 augenrules[1668]: No rules Mar 19 11:35:23.131978 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:35:23.143019 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:35:23.143330 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:35:23.151208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:35:23.151646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:35:23.154991 systemd-resolved[1630]: Positive Trust Anchors: Mar 19 11:35:23.155359 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:35:23.155453 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:35:23.159072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:35:23.159338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:35:23.161785 systemd-networkd[1492]: lo: Link UP Mar 19 11:35:23.162089 systemd-networkd[1492]: lo: Gained carrier Mar 19 11:35:23.164200 systemd-networkd[1492]: Enumeration completed Mar 19 11:35:23.164656 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:23.164744 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:35:23.167678 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:35:23.174009 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:35:23.174907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:35:23.182067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:35:23.183932 systemd-resolved[1630]: Using system hostname 'ci-4230.1.0-a-4d5ab2e439'. Mar 19 11:35:23.203477 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:35:23.209615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 19 11:35:23.215487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:35:23.227257 kernel: mlx5_core 910e:00:02.0 enP37134s1: Link up Mar 19 11:35:23.231506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:35:23.240657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:35:23.249012 augenrules[1683]: /sbin/augenrules: No change Mar 19 11:35:23.249555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:35:23.257832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:35:23.257983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:35:23.261376 augenrules[1705]: No rules Mar 19 11:35:23.270343 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:35:23.280146 kernel: hv_netvsc 002248b5-91ea-0022-48b5-91ea002248b5 eth0: Data path switched to VF: enP37134s1 Mar 19 11:35:23.280510 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:35:23.281409 systemd-networkd[1492]: enP37134s1: Link UP Mar 19 11:35:23.281513 systemd-networkd[1492]: eth0: Link UP Mar 19 11:35:23.281517 systemd-networkd[1492]: eth0: Gained carrier Mar 19 11:35:23.281533 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:23.290986 systemd-networkd[1492]: enP37134s1: Gained carrier Mar 19 11:35:23.291772 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:35:23.304323 systemd-networkd[1492]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 19 11:35:23.304555 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:35:23.312588 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:35:23.313358 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:35:23.320563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:35:23.320743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:35:23.329602 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:35:23.329778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:35:23.336607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:35:23.336782 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:35:23.345023 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:35:23.345262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:35:23.354612 systemd[1]: Finished ensure-sysext.service. Mar 19 11:35:23.366502 systemd[1]: Reached target network.target - Network. Mar 19 11:35:23.371706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:35:23.379010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
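
At this point systemd-networkd has brought up enP37134s1/eth0 and acquired a DHCPv4 lease, and systemd-resolved is running. A sketch of querying the same state through the standard networkctl and resolvectl front ends; eth0 is the interface name taken from the log:

    # Sketch: show link state from systemd-networkd and DNS state from
    # systemd-resolved for the interface configured above.
    import subprocess

    for cmd in (["networkctl", "status", "eth0"], ["resolvectl", "status"]):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(f"$ {' '.join(cmd)}")
        print(out.stdout)
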
Mar 19 11:35:23.379260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:35:23.401802 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:35:23.574306 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:35:23.581718 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:35:24.480457 systemd-networkd[1492]: eth0: Gained IPv6LL Mar 19 11:35:24.483196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:35:24.490520 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:35:25.120371 systemd-networkd[1492]: enP37134s1: Gained IPv6LL Mar 19 11:35:26.503794 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:35:26.685624 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:35:26.697435 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:35:26.711169 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:35:26.717349 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:35:26.722995 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:35:26.729529 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:35:26.736324 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:35:26.742079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:35:26.748821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:35:26.755670 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:35:26.755706 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:35:26.760408 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:35:26.788971 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:35:26.796148 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:35:26.803524 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:35:26.810371 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:35:26.816956 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:35:26.825200 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:35:26.831088 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:35:26.838069 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:35:26.843779 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:35:26.848807 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:35:26.853752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
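
With network-online.target, sysinit.target, and basic.target reached, the remaining activity is ordinary service start-up. A quick sketch of confirming such target and timer states with systemctl is-active, which prints "active" and exits 0 only for active units:

    # Sketch: verify a few of the targets and timers reported above.
    import subprocess

    for unit in ("network-online.target", "sysinit.target", "basic.target", "logrotate.timer"):
        state = subprocess.run(["systemctl", "is-active", unit], capture_output=True, text=True)
        print(unit, state.stdout.strip())
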
Mar 19 11:35:26.853785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:35:26.865516 systemd[1]: Starting chronyd.service - NTP client/server... Mar 19 11:35:26.877424 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:35:26.887910 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 19 11:35:26.895478 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:35:26.908148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:35:26.916926 (chronyd)[1725]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 19 11:35:26.918456 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:35:26.926148 jq[1729]: false Mar 19 11:35:26.927803 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:35:26.927850 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 19 11:35:26.929062 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 19 11:35:26.935779 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 19 11:35:26.936919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:35:26.944563 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:35:26.952408 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:35:26.962828 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:35:26.969805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:35:26.979460 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:35:26.988441 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:35:26.996406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:35:26.996925 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:35:26.998446 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:35:27.005819 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:35:27.020730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:35:27.022332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:35:27.024563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:35:27.024752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 19 11:35:27.035541 jq[1748]: true Mar 19 11:35:27.050320 jq[1754]: true Mar 19 11:35:27.081058 KVP[1734]: KVP starting; pid is:1734 Mar 19 11:35:27.085153 chronyd[1778]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 19 11:35:27.088260 KVP[1734]: KVP LIC Version: 3.1 Mar 19 11:35:27.090276 kernel: hv_utils: KVP IC version 4.0 Mar 19 11:35:27.104918 (ntainerd)[1780]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:35:27.109535 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:35:27.109746 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:35:27.156020 extend-filesystems[1733]: Found loop4 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found loop5 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found loop6 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found loop7 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda1 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda2 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda3 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found usr Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda4 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda6 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda7 Mar 19 11:35:27.156020 extend-filesystems[1733]: Found sda9 Mar 19 11:35:27.156020 extend-filesystems[1733]: Checking size of /dev/sda9 Mar 19 11:35:27.265410 tar[1752]: linux-arm64/helm Mar 19 11:35:27.265646 update_engine[1747]: I20250319 11:35:27.201618 1747 main.cc:92] Flatcar Update Engine starting Mar 19 11:35:27.194605 systemd[1]: Started chronyd.service - NTP client/server. Mar 19 11:35:27.265898 extend-filesystems[1733]: Old size kept for /dev/sda9 Mar 19 11:35:27.265898 extend-filesystems[1733]: Found sr0 Mar 19 11:35:27.191013 chronyd[1778]: Timezone right/UTC failed leap second check, ignoring Mar 19 11:35:27.261005 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:35:27.191198 chronyd[1778]: Loaded seccomp filter (level 2) Mar 19 11:35:27.261197 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:35:27.397662 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1806) Mar 19 11:35:27.585190 dbus-daemon[1728]: [system] SELinux support is enabled Mar 19 11:35:27.620727 update_engine[1747]: I20250319 11:35:27.605253 1747 update_check_scheduler.cc:74] Next update check in 10m37s Mar 19 11:35:27.585460 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:35:27.594171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:35:27.594192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:35:27.604440 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:35:27.604461 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:35:27.613412 systemd-logind[1744]: New seat seat0. 
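
chronyd is now started with its seccomp filter loaded. Once it has selected a time source, its synchronization state can be read with chronyc tracking, as in this sketch (chronyc ships with chrony; the output includes Reference ID, Stratum, and the system time offset):

    # Sketch: query chronyd's tracking state.
    import subprocess

    out = subprocess.run(["chronyc", "tracking"], capture_output=True, text=True)
    print(out.stdout or out.stderr)
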
Mar 19 11:35:27.613894 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:35:27.621909 systemd-logind[1744]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 11:35:27.622349 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:35:27.646650 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:35:27.663468 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:35:27.700731 coreos-metadata[1727]: Mar 19 11:35:27.700 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 19 11:35:27.706731 coreos-metadata[1727]: Mar 19 11:35:27.706 INFO Fetch successful Mar 19 11:35:27.706731 coreos-metadata[1727]: Mar 19 11:35:27.706 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 19 11:35:27.712188 coreos-metadata[1727]: Mar 19 11:35:27.712 INFO Fetch successful Mar 19 11:35:27.714236 coreos-metadata[1727]: Mar 19 11:35:27.712 INFO Fetching http://168.63.129.16/machine/28ce8394-adba-4aa3-80fa-e83d09141bc3/00ea9d9b%2Dd956%2D449a%2Dbeca%2D9accc13c0b88.%5Fci%2D4230.1.0%2Da%2D4d5ab2e439?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 19 11:35:27.714236 coreos-metadata[1727]: Mar 19 11:35:27.713 INFO Fetch successful Mar 19 11:35:27.714236 coreos-metadata[1727]: Mar 19 11:35:27.714 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 19 11:35:27.727891 coreos-metadata[1727]: Mar 19 11:35:27.726 INFO Fetch successful Mar 19 11:35:27.759843 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 11:35:27.770887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:35:27.776358 bash[1776]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:35:27.780761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:35:27.790212 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:35:27.924770 sshd_keygen[1749]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:35:27.948763 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:35:27.962283 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:35:27.978537 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 19 11:35:27.989716 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:35:27.989963 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:35:28.014703 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:35:28.030546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:35:28.039713 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:35:28.047419 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 19 11:35:28.057193 tar[1752]: linux-arm64/LICENSE Mar 19 11:35:28.057942 tar[1752]: linux-arm64/README.md Mar 19 11:35:28.069064 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:35:28.396045 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:35:28.412606 systemd[1]: Started getty@tty1.service - Getty on tty1. 
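
The coreos-metadata fetches above hit the Azure WireServer (168.63.129.16) and the Instance Metadata Service (169.254.169.254). The IMDS query can be repeated from any process on the VM as long as it sends the mandatory Metadata: true header and bypasses proxies, as in this sketch using the same vmSize endpoint the agent called:

    # Sketch: fetch the VM size from the Azure Instance Metadata Service.
    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("vmSize:", resp.read().decode())
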
Mar 19 11:35:28.425506 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 19 11:35:28.433125 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:35:28.450926 kubelet[1894]: E0319 11:35:28.450881 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:35:28.453103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:35:28.454126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:35:28.456291 systemd[1]: kubelet.service: Consumed 676ms CPU time, 231M memory peak. Mar 19 11:35:28.462648 locksmithd[1857]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:35:28.928344 containerd[1780]: time="2025-03-19T11:35:28.928247360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:35:28.954367 containerd[1780]: time="2025-03-19T11:35:28.954290880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.955726000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.955759200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.955782040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.955944880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.955960880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956024520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956036240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956266800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956283160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956296240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:35:28.956880 containerd[1780]: time="2025-03-19T11:35:28.956305720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956381800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956578800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956701640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956713560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956781160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:35:28.957129 containerd[1780]: time="2025-03-19T11:35:28.956822240Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:35:29.467687 containerd[1780]: time="2025-03-19T11:35:29.467622400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:35:29.467849 containerd[1780]: time="2025-03-19T11:35:29.467719680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:35:29.467849 containerd[1780]: time="2025-03-19T11:35:29.467748600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:35:29.467849 containerd[1780]: time="2025-03-19T11:35:29.467775840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:35:29.467849 containerd[1780]: time="2025-03-19T11:35:29.467801320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:35:29.468018 containerd[1780]: time="2025-03-19T11:35:29.467996560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468291760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468450000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468466640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468481120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468495240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468510200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468523400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468537120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468551800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468565960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468577800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468589840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468646360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.468982 containerd[1780]: time="2025-03-19T11:35:29.468660600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468672920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468686080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468698680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468711560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468724000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468735920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468747480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468762080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468774880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468786000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468797520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468813320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468837000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468857080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469371 containerd[1780]: time="2025-03-19T11:35:29.468870080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.468941280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.468962960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.468973200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.468984720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.468994360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.469007080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.469016480Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:35:29.469968 containerd[1780]: time="2025-03-19T11:35:29.469027400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 11:35:29.470105 containerd[1780]: time="2025-03-19T11:35:29.469325160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:35:29.470105 containerd[1780]: time="2025-03-19T11:35:29.469372600Z" level=info msg="Connect containerd service" Mar 19 11:35:29.470105 containerd[1780]: time="2025-03-19T11:35:29.469404960Z" level=info msg="using legacy CRI server" Mar 19 11:35:29.470105 containerd[1780]: time="2025-03-19T11:35:29.469411720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:35:29.470105 containerd[1780]: time="2025-03-19T11:35:29.469710560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:35:29.471284 containerd[1780]: time="2025-03-19T11:35:29.471245120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:35:29.472251 
containerd[1780]: time="2025-03-19T11:35:29.471564480Z" level=info msg="Start subscribing containerd event" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471620840Z" level=info msg="Start recovering state" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471709120Z" level=info msg="Start event monitor" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471720440Z" level=info msg="Start snapshots syncer" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471730800Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471743960Z" level=info msg="Start streaming server" Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471938840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.471990600Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:35:29.472251 containerd[1780]: time="2025-03-19T11:35:29.472039760Z" level=info msg="containerd successfully booted in 0.545110s" Mar 19 11:35:29.472577 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:35:29.479809 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:35:29.486943 systemd[1]: Startup finished in 714ms (kernel) + 13.420s (initrd) + 12.662s (userspace) = 26.797s. Mar 19 11:35:29.896648 login[1906]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 19 11:35:29.898124 login[1907]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:29.905617 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:35:29.913615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:35:29.921577 systemd-logind[1744]: New session 1 of user core. Mar 19 11:35:29.927623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:35:29.939637 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:35:29.942384 (systemd)[1928]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:35:29.944821 systemd-logind[1744]: New session c1 of user core. Mar 19 11:35:30.377002 systemd[1928]: Queued start job for default target default.target. Mar 19 11:35:30.386201 systemd[1928]: Created slice app.slice - User Application Slice. Mar 19 11:35:30.386699 systemd[1928]: Reached target paths.target - Paths. Mar 19 11:35:30.386829 systemd[1928]: Reached target timers.target - Timers. Mar 19 11:35:30.388146 systemd[1928]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:35:30.398855 systemd[1928]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:35:30.398953 systemd[1928]: Reached target sockets.target - Sockets. Mar 19 11:35:30.398987 systemd[1928]: Reached target basic.target - Basic System. Mar 19 11:35:30.399015 systemd[1928]: Reached target default.target - Main User Target. Mar 19 11:35:30.399039 systemd[1928]: Startup finished in 447ms. Mar 19 11:35:30.399756 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:35:30.410398 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 19 11:35:30.897089 login[1906]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:30.903349 systemd-logind[1744]: New session 2 of user core. Mar 19 11:35:30.916451 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:35:32.140264 waagent[1895]: 2025-03-19T11:35:32.136597Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 19 11:35:32.142609 waagent[1895]: 2025-03-19T11:35:32.142541Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 19 11:35:32.146978 waagent[1895]: 2025-03-19T11:35:32.146911Z INFO Daemon Daemon Python: 3.11.11 Mar 19 11:35:32.151426 waagent[1895]: 2025-03-19T11:35:32.151340Z INFO Daemon Daemon Run daemon Mar 19 11:35:32.155397 waagent[1895]: 2025-03-19T11:35:32.155324Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 19 11:35:32.164462 waagent[1895]: 2025-03-19T11:35:32.164362Z INFO Daemon Daemon Using waagent for provisioning Mar 19 11:35:32.169599 waagent[1895]: 2025-03-19T11:35:32.169522Z INFO Daemon Daemon Activate resource disk Mar 19 11:35:32.174861 waagent[1895]: 2025-03-19T11:35:32.174768Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 19 11:35:32.187318 waagent[1895]: 2025-03-19T11:35:32.187216Z INFO Daemon Daemon Found device: None Mar 19 11:35:32.191907 waagent[1895]: 2025-03-19T11:35:32.191839Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 19 11:35:32.200018 waagent[1895]: 2025-03-19T11:35:32.199946Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 19 11:35:32.211150 waagent[1895]: 2025-03-19T11:35:32.211088Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:35:32.216647 waagent[1895]: 2025-03-19T11:35:32.216595Z INFO Daemon Daemon Running default provisioning handler Mar 19 11:35:32.228420 waagent[1895]: 2025-03-19T11:35:32.228352Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 19 11:35:32.241293 waagent[1895]: 2025-03-19T11:35:32.241209Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 19 11:35:32.250588 waagent[1895]: 2025-03-19T11:35:32.250524Z INFO Daemon Daemon cloud-init is enabled: False Mar 19 11:35:32.255642 waagent[1895]: 2025-03-19T11:35:32.255586Z INFO Daemon Daemon Copying ovf-env.xml Mar 19 11:35:33.892476 waagent[1895]: 2025-03-19T11:35:33.892375Z INFO Daemon Daemon Successfully mounted dvd Mar 19 11:35:33.910564 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 19 11:35:33.913258 waagent[1895]: 2025-03-19T11:35:33.912949Z INFO Daemon Daemon Detect protocol endpoint Mar 19 11:35:33.918330 waagent[1895]: 2025-03-19T11:35:33.918200Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 19 11:35:33.924937 waagent[1895]: 2025-03-19T11:35:33.924842Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 19 11:35:33.932084 waagent[1895]: 2025-03-19T11:35:33.931996Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 19 11:35:33.938256 waagent[1895]: 2025-03-19T11:35:33.938162Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 19 11:35:33.943272 waagent[1895]: 2025-03-19T11:35:33.943192Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 19 11:35:34.003464 waagent[1895]: 2025-03-19T11:35:34.003400Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 19 11:35:34.010511 waagent[1895]: 2025-03-19T11:35:34.010468Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 19 11:35:34.015872 waagent[1895]: 2025-03-19T11:35:34.015814Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 19 11:35:34.362871 waagent[1895]: 2025-03-19T11:35:34.362709Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 19 11:35:34.369600 waagent[1895]: 2025-03-19T11:35:34.369528Z INFO Daemon Daemon Forcing an update of the goal state. Mar 19 11:35:34.379377 waagent[1895]: 2025-03-19T11:35:34.379294Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:35:34.442445 waagent[1895]: 2025-03-19T11:35:34.442386Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 19 11:35:34.448932 waagent[1895]: 2025-03-19T11:35:34.448872Z INFO Daemon Mar 19 11:35:34.453851 waagent[1895]: 2025-03-19T11:35:34.453767Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 65f462cf-bee0-409f-bd81-b3cd45a0bbe3 eTag: 7441661171669781701 source: Fabric] Mar 19 11:35:35.279281 waagent[1895]: 2025-03-19T11:35:35.279036Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 19 11:35:35.286212 waagent[1895]: 2025-03-19T11:35:35.286153Z INFO Daemon Mar 19 11:35:35.289166 waagent[1895]: 2025-03-19T11:35:35.289113Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:35:35.300448 waagent[1895]: 2025-03-19T11:35:35.300405Z INFO Daemon Daemon Downloading artifacts profile blob Mar 19 11:35:35.463309 waagent[1895]: 2025-03-19T11:35:35.462653Z INFO Daemon Downloaded certificate {'thumbprint': 'DF140C0F4B50F8696882898DD7705A0A4C72C180', 'hasPrivateKey': True} Mar 19 11:35:35.472693 waagent[1895]: 2025-03-19T11:35:35.472624Z INFO Daemon Downloaded certificate {'thumbprint': '0C616404AF2C2D376D573474A235391556947967', 'hasPrivateKey': False} Mar 19 11:35:35.483960 waagent[1895]: 2025-03-19T11:35:35.483902Z INFO Daemon Fetch goal state completed Mar 19 11:35:35.495987 waagent[1895]: 2025-03-19T11:35:35.495914Z INFO Daemon Daemon Starting provisioning Mar 19 11:35:35.501056 waagent[1895]: 2025-03-19T11:35:35.500987Z INFO Daemon Daemon Handle ovf-env.xml. Mar 19 11:35:35.505783 waagent[1895]: 2025-03-19T11:35:35.505727Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-4d5ab2e439] Mar 19 11:35:36.233254 waagent[1895]: 2025-03-19T11:35:36.232623Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-4d5ab2e439] Mar 19 11:35:36.238970 waagent[1895]: 2025-03-19T11:35:36.238896Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 19 11:35:36.247592 waagent[1895]: 2025-03-19T11:35:36.247522Z INFO Daemon Daemon Primary interface is [eth0] Mar 19 11:35:36.260546 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:35:36.260555 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 19 11:35:36.260583 systemd-networkd[1492]: eth0: DHCP lease lost Mar 19 11:35:36.262030 waagent[1895]: 2025-03-19T11:35:36.261947Z INFO Daemon Daemon Create user account if not exists Mar 19 11:35:36.267756 waagent[1895]: 2025-03-19T11:35:36.267692Z INFO Daemon Daemon User core already exists, skip useradd Mar 19 11:35:36.273566 waagent[1895]: 2025-03-19T11:35:36.273491Z INFO Daemon Daemon Configure sudoer Mar 19 11:35:36.278218 waagent[1895]: 2025-03-19T11:35:36.278144Z INFO Daemon Daemon Configure sshd Mar 19 11:35:36.282660 waagent[1895]: 2025-03-19T11:35:36.282599Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 19 11:35:36.295195 waagent[1895]: 2025-03-19T11:35:36.295121Z INFO Daemon Daemon Deploy ssh public key. Mar 19 11:35:36.310330 systemd-networkd[1492]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 19 11:35:37.432272 waagent[1895]: 2025-03-19T11:35:37.431463Z INFO Daemon Daemon Provisioning complete Mar 19 11:35:37.449330 waagent[1895]: 2025-03-19T11:35:37.449273Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 19 11:35:37.455865 waagent[1895]: 2025-03-19T11:35:37.455784Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 19 11:35:37.466145 waagent[1895]: 2025-03-19T11:35:37.466083Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 19 11:35:37.605810 waagent[1983]: 2025-03-19T11:35:37.605206Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 19 11:35:37.605810 waagent[1983]: 2025-03-19T11:35:37.605406Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 19 11:35:37.605810 waagent[1983]: 2025-03-19T11:35:37.605462Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 19 11:35:38.621636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:35:38.629496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:35:38.757051 waagent[1983]: 2025-03-19T11:35:38.756935Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 19 11:35:38.757380 waagent[1983]: 2025-03-19T11:35:38.757215Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:35:38.757380 waagent[1983]: 2025-03-19T11:35:38.757322Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:35:38.765981 waagent[1983]: 2025-03-19T11:35:38.765898Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 19 11:35:38.772957 waagent[1983]: 2025-03-19T11:35:38.772903Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 19 11:35:38.773546 waagent[1983]: 2025-03-19T11:35:38.773494Z INFO ExtHandler Mar 19 11:35:38.773624 waagent[1983]: 2025-03-19T11:35:38.773593Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8f5eef4b-4893-4b4d-b7b3-d280fec08bc4 eTag: 7441661171669781701 source: Fabric] Mar 19 11:35:38.773933 waagent[1983]: 2025-03-19T11:35:38.773889Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 19 11:35:38.901824 waagent[1983]: 2025-03-19T11:35:38.900600Z INFO ExtHandler Mar 19 11:35:38.901824 waagent[1983]: 2025-03-19T11:35:38.900819Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 19 11:35:38.906258 waagent[1983]: 2025-03-19T11:35:38.905551Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 19 11:35:43.244273 waagent[1983]: 2025-03-19T11:35:43.243466Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DF140C0F4B50F8696882898DD7705A0A4C72C180', 'hasPrivateKey': True} Mar 19 11:35:43.244273 waagent[1983]: 2025-03-19T11:35:43.244008Z INFO ExtHandler Downloaded certificate {'thumbprint': '0C616404AF2C2D376D573474A235391556947967', 'hasPrivateKey': False} Mar 19 11:35:43.244623 waagent[1983]: 2025-03-19T11:35:43.244462Z INFO ExtHandler Fetch goal state completed Mar 19 11:35:43.260731 waagent[1983]: 2025-03-19T11:35:43.260667Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1983 Mar 19 11:35:43.260889 waagent[1983]: 2025-03-19T11:35:43.260852Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 19 11:35:43.263592 waagent[1983]: 2025-03-19T11:35:43.262559Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 19 11:35:43.263592 waagent[1983]: 2025-03-19T11:35:43.262953Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 19 11:35:43.344938 waagent[1983]: 2025-03-19T11:35:43.344759Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 19 11:35:43.345064 waagent[1983]: 2025-03-19T11:35:43.344957Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 19 11:35:43.351185 waagent[1983]: 2025-03-19T11:35:43.351124Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 19 11:35:43.357913 systemd[1]: Reload requested from client PID 2004 ('systemctl') (unit waagent.service)... Mar 19 11:35:43.357941 systemd[1]: Reloading... Mar 19 11:35:43.460275 zram_generator::config[2047]: No configuration found. Mar 19 11:35:43.566404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:35:43.675344 systemd[1]: Reloading finished in 317 ms. Mar 19 11:35:43.691809 waagent[1983]: 2025-03-19T11:35:43.688563Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 19 11:35:43.715352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:35:43.719686 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:35:43.722553 systemd[1]: Reload requested from client PID 2098 ('systemctl') (unit waagent.service)... Mar 19 11:35:43.722569 systemd[1]: Reloading... 
Mar 19 11:35:43.787939 kubelet[2101]: E0319 11:35:43.787850 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:35:43.826320 zram_generator::config[2149]: No configuration found. Mar 19 11:35:43.942043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:35:44.051075 systemd[1]: Reloading finished in 327 ms. Mar 19 11:35:44.066851 waagent[1983]: 2025-03-19T11:35:44.066773Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 19 11:35:44.068017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:35:44.069140 waagent[1983]: 2025-03-19T11:35:44.068373Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 19 11:35:44.068147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:35:44.068404 systemd[1]: kubelet.service: Consumed 129ms CPU time, 97.6M memory peak. Mar 19 11:35:44.290853 waagent[1983]: 2025-03-19T11:35:44.290727Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 19 11:35:44.291817 waagent[1983]: 2025-03-19T11:35:44.291747Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 19 11:35:44.292705 waagent[1983]: 2025-03-19T11:35:44.292653Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 19 11:35:44.292867 waagent[1983]: 2025-03-19T11:35:44.292783Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:35:44.293019 waagent[1983]: 2025-03-19T11:35:44.292968Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:35:44.293430 waagent[1983]: 2025-03-19T11:35:44.293369Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 19 11:35:44.293815 waagent[1983]: 2025-03-19T11:35:44.293708Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 19 11:35:44.293950 waagent[1983]: 2025-03-19T11:35:44.293877Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 19 11:35:44.294375 waagent[1983]: 2025-03-19T11:35:44.294319Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 19 11:35:44.294375 waagent[1983]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 19 11:35:44.294375 waagent[1983]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 19 11:35:44.294375 waagent[1983]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 19 11:35:44.294375 waagent[1983]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:35:44.294375 waagent[1983]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:35:44.294375 waagent[1983]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 19 11:35:44.294802 waagent[1983]: 2025-03-19T11:35:44.294735Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 19 11:35:44.294927 waagent[1983]: 2025-03-19T11:35:44.294875Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 19 11:35:44.295096 waagent[1983]: 2025-03-19T11:35:44.295048Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 19 11:35:44.295786 waagent[1983]: 2025-03-19T11:35:44.295424Z INFO EnvHandler ExtHandler Configure routes Mar 19 11:35:44.295928 waagent[1983]: 2025-03-19T11:35:44.295833Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 19 11:35:44.296089 waagent[1983]: 2025-03-19T11:35:44.296034Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 19 11:35:44.296165 waagent[1983]: 2025-03-19T11:35:44.296103Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 19 11:35:44.296581 waagent[1983]: 2025-03-19T11:35:44.296476Z INFO EnvHandler ExtHandler Gateway:None Mar 19 11:35:44.297517 waagent[1983]: 2025-03-19T11:35:44.297479Z INFO EnvHandler ExtHandler Routes:None Mar 19 11:35:44.309901 waagent[1983]: 2025-03-19T11:35:44.309852Z INFO ExtHandler ExtHandler Mar 19 11:35:44.311251 waagent[1983]: 2025-03-19T11:35:44.310078Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a6aa19fa-9c63-40e3-8060-aa86b07b38fc correlation e98f973a-1cf1-4b86-b802-a8dc0767c4bd created: 2025-03-19T11:34:06.050148Z] Mar 19 11:35:44.311251 waagent[1983]: 2025-03-19T11:35:44.310473Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 19 11:35:44.311251 waagent[1983]: 2025-03-19T11:35:44.311029Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 19 11:35:44.333813 waagent[1983]: 2025-03-19T11:35:44.333746Z INFO MonitorHandler ExtHandler Network interfaces: Mar 19 11:35:44.333813 waagent[1983]: Executing ['ip', '-a', '-o', 'link']: Mar 19 11:35:44.333813 waagent[1983]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 19 11:35:44.333813 waagent[1983]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:91:ea brd ff:ff:ff:ff:ff:ff Mar 19 11:35:44.333813 waagent[1983]: 3: enP37134s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:91:ea brd ff:ff:ff:ff:ff:ff\ altname enP37134p0s2 Mar 19 11:35:44.333813 waagent[1983]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 19 11:35:44.333813 waagent[1983]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 19 11:35:44.333813 waagent[1983]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 19 11:35:44.333813 waagent[1983]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 19 11:35:44.333813 waagent[1983]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 19 11:35:44.333813 waagent[1983]: 2: eth0 inet6 fe80::222:48ff:feb5:91ea/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:35:44.333813 waagent[1983]: 3: enP37134s1 inet6 fe80::222:48ff:feb5:91ea/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 19 11:35:44.371482 waagent[1983]: 2025-03-19T11:35:44.371416Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A2093F30-0952-4258-B842-D097A1460300;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 19 11:35:44.426509 waagent[1983]: 2025-03-19T11:35:44.426431Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Mar 19 11:35:44.426509 waagent[1983]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.426509 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.426509 waagent[1983]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.426509 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.426509 waagent[1983]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.426509 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.426509 waagent[1983]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:35:44.426509 waagent[1983]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:35:44.426509 waagent[1983]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 19 11:35:44.429571 waagent[1983]: 2025-03-19T11:35:44.429509Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 19 11:35:44.429571 waagent[1983]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.429571 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.429571 waagent[1983]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.429571 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.429571 waagent[1983]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 19 11:35:44.429571 waagent[1983]: pkts bytes target prot opt in out source destination Mar 19 11:35:44.429571 waagent[1983]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 19 11:35:44.429571 waagent[1983]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 19 11:35:44.429571 waagent[1983]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 19 11:35:44.429814 waagent[1983]: 2025-03-19T11:35:44.429776Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 19 11:35:50.994817 chronyd[1778]: Selected source PHC0 Mar 19 11:35:54.121705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:35:54.132468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:35:54.226503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:35:54.230348 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:35:54.283634 kubelet[2244]: E0319 11:35:54.283586 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:35:54.286103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:35:54.286287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:35:54.286604 systemd[1]: kubelet.service: Consumed 113ms CPU time, 96.5M memory peak. Mar 19 11:36:01.631297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:36:01.635478 systemd[1]: Started sshd@0-10.200.20.43:22-10.200.16.10:38948.service - OpenSSH per-connection server daemon (10.200.16.10:38948). 
Mar 19 11:36:02.256193 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 38948 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:02.257520 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:02.262408 systemd-logind[1744]: New session 3 of user core. Mar 19 11:36:02.269403 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:36:02.671115 systemd[1]: Started sshd@1-10.200.20.43:22-10.200.16.10:38958.service - OpenSSH per-connection server daemon (10.200.16.10:38958). Mar 19 11:36:03.120264 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 38958 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:03.121501 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:03.127394 systemd-logind[1744]: New session 4 of user core. Mar 19 11:36:03.133380 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:36:03.442947 sshd[2259]: Connection closed by 10.200.16.10 port 38958 Mar 19 11:36:03.443743 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:03.446513 systemd[1]: sshd@1-10.200.20.43:22-10.200.16.10:38958.service: Deactivated successfully. Mar 19 11:36:03.448119 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:36:03.450397 systemd-logind[1744]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:36:03.451671 systemd-logind[1744]: Removed session 4. Mar 19 11:36:03.533471 systemd[1]: Started sshd@2-10.200.20.43:22-10.200.16.10:38964.service - OpenSSH per-connection server daemon (10.200.16.10:38964). Mar 19 11:36:03.978972 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 38964 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:03.980297 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:03.985906 systemd-logind[1744]: New session 5 of user core. Mar 19 11:36:03.991395 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:36:04.299328 sshd[2267]: Connection closed by 10.200.16.10 port 38964 Mar 19 11:36:04.299821 sshd-session[2265]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:04.303884 systemd[1]: sshd@2-10.200.20.43:22-10.200.16.10:38964.service: Deactivated successfully. Mar 19 11:36:04.305649 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:36:04.306924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 11:36:04.309372 systemd-logind[1744]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:36:04.315416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:04.316995 systemd-logind[1744]: Removed session 5. Mar 19 11:36:04.384764 systemd[1]: Started sshd@3-10.200.20.43:22-10.200.16.10:38970.service - OpenSSH per-connection server daemon (10.200.16.10:38970). Mar 19 11:36:04.401538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:36:04.409778 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:04.494728 kubelet[2281]: E0319 11:36:04.494672 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:04.497635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:04.497775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:36:04.498261 systemd[1]: kubelet.service: Consumed 119ms CPU time, 94.3M memory peak. Mar 19 11:36:04.840115 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 38970 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:04.841421 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:04.846140 systemd-logind[1744]: New session 6 of user core. Mar 19 11:36:04.855413 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:36:05.162966 sshd[2290]: Connection closed by 10.200.16.10 port 38970 Mar 19 11:36:05.163552 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:05.167145 systemd[1]: sshd@3-10.200.20.43:22-10.200.16.10:38970.service: Deactivated successfully. Mar 19 11:36:05.168950 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:36:05.169659 systemd-logind[1744]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:36:05.170590 systemd-logind[1744]: Removed session 6. Mar 19 11:36:05.249508 systemd[1]: Started sshd@4-10.200.20.43:22-10.200.16.10:38976.service - OpenSSH per-connection server daemon (10.200.16.10:38976). Mar 19 11:36:05.693665 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 38976 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:36:05.694891 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:05.700374 systemd-logind[1744]: New session 7 of user core. Mar 19 11:36:05.707377 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:36:06.098203 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:36:06.098513 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:36:07.195506 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:36:07.195642 (dockerd)[2316]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:36:08.013820 dockerd[2316]: time="2025-03-19T11:36:08.013757257Z" level=info msg="Starting up" Mar 19 11:36:08.380052 dockerd[2316]: time="2025-03-19T11:36:08.380006792Z" level=info msg="Loading containers: start." Mar 19 11:36:08.592269 kernel: Initializing XFRM netlink socket Mar 19 11:36:08.740719 systemd-networkd[1492]: docker0: Link UP Mar 19 11:36:08.785555 dockerd[2316]: time="2025-03-19T11:36:08.785502346Z" level=info msg="Loading containers: done." 
Mar 19 11:36:08.814721 dockerd[2316]: time="2025-03-19T11:36:08.814661640Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:36:08.814913 dockerd[2316]: time="2025-03-19T11:36:08.814775920Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:36:08.814913 dockerd[2316]: time="2025-03-19T11:36:08.814906080Z" level=info msg="Daemon has completed initialization" Mar 19 11:36:08.888081 dockerd[2316]: time="2025-03-19T11:36:08.887955755Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:36:08.888607 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:36:10.000877 containerd[1780]: time="2025-03-19T11:36:10.000824528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 19 11:36:10.284249 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 19 11:36:11.151919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334261628.mount: Deactivated successfully. Mar 19 11:36:12.693581 update_engine[1747]: I20250319 11:36:12.693279 1747 update_attempter.cc:509] Updating boot flags... Mar 19 11:36:12.792249 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2568) Mar 19 11:36:12.967249 containerd[1780]: time="2025-03-19T11:36:12.967090107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:12.971811 containerd[1780]: time="2025-03-19T11:36:12.971766310Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552766" Mar 19 11:36:12.980639 containerd[1780]: time="2025-03-19T11:36:12.980547714Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:12.986255 containerd[1780]: time="2025-03-19T11:36:12.986005836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:12.987754 containerd[1780]: time="2025-03-19T11:36:12.987258517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.986387389s" Mar 19 11:36:12.987754 containerd[1780]: time="2025-03-19T11:36:12.987295037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 19 11:36:12.988661 containerd[1780]: time="2025-03-19T11:36:12.988631798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 19 11:36:14.621579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 19 11:36:14.631750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:14.741387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:36:14.747122 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:14.790100 kubelet[2627]: E0319 11:36:14.789630 2627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:14.792740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:14.792882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:36:14.794466 systemd[1]: kubelet.service: Consumed 123ms CPU time, 94.2M memory peak. Mar 19 11:36:15.130276 containerd[1780]: time="2025-03-19T11:36:15.129194088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:15.142947 containerd[1780]: time="2025-03-19T11:36:15.142890658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458978" Mar 19 11:36:15.149332 containerd[1780]: time="2025-03-19T11:36:15.149272942Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:15.163258 containerd[1780]: time="2025-03-19T11:36:15.161005591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:15.164137 containerd[1780]: time="2025-03-19T11:36:15.164095873Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 2.175422275s" Mar 19 11:36:15.164258 containerd[1780]: time="2025-03-19T11:36:15.164240913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 19 11:36:15.165815 containerd[1780]: time="2025-03-19T11:36:15.165775114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 19 11:36:17.320278 containerd[1780]: time="2025-03-19T11:36:17.319941293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:17.323065 containerd[1780]: time="2025-03-19T11:36:17.323026935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125829" Mar 19 11:36:17.328233 containerd[1780]: time="2025-03-19T11:36:17.328194859Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:17.335014 containerd[1780]: time="2025-03-19T11:36:17.334951344Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:17.336089 containerd[1780]: time="2025-03-19T11:36:17.335944824Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 2.17005771s" Mar 19 11:36:17.336089 containerd[1780]: time="2025-03-19T11:36:17.335981904Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 19 11:36:17.336875 containerd[1780]: time="2025-03-19T11:36:17.336721505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 19 11:36:18.614264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593629009.mount: Deactivated successfully. Mar 19 11:36:19.013871 containerd[1780]: time="2025-03-19T11:36:19.013812414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:19.018554 containerd[1780]: time="2025-03-19T11:36:19.018516617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871915" Mar 19 11:36:19.022437 containerd[1780]: time="2025-03-19T11:36:19.022372980Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:19.029381 containerd[1780]: time="2025-03-19T11:36:19.029343985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:19.030099 containerd[1780]: time="2025-03-19T11:36:19.029888386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.693134081s" Mar 19 11:36:19.030099 containerd[1780]: time="2025-03-19T11:36:19.029926666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 19 11:36:19.030952 containerd[1780]: time="2025-03-19T11:36:19.030837746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 19 11:36:19.767956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782300865.mount: Deactivated successfully. 
Mar 19 11:36:21.700278 containerd[1780]: time="2025-03-19T11:36:21.699271633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:21.706379 containerd[1780]: time="2025-03-19T11:36:21.706088637Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 19 11:36:21.712737 containerd[1780]: time="2025-03-19T11:36:21.712680280Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:21.723700 containerd[1780]: time="2025-03-19T11:36:21.723609446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:21.725019 containerd[1780]: time="2025-03-19T11:36:21.724881007Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.694009901s" Mar 19 11:36:21.725019 containerd[1780]: time="2025-03-19T11:36:21.724918767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 19 11:36:21.725719 containerd[1780]: time="2025-03-19T11:36:21.725534487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 19 11:36:22.649937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065666397.mount: Deactivated successfully. 
Mar 19 11:36:22.688290 containerd[1780]: time="2025-03-19T11:36:22.687578099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:22.694723 containerd[1780]: time="2025-03-19T11:36:22.694500582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 19 11:36:22.706264 containerd[1780]: time="2025-03-19T11:36:22.705108028Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:22.713981 containerd[1780]: time="2025-03-19T11:36:22.713029993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:22.713981 containerd[1780]: time="2025-03-19T11:36:22.713847953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 988.283386ms" Mar 19 11:36:22.713981 containerd[1780]: time="2025-03-19T11:36:22.713879593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 19 11:36:22.714688 containerd[1780]: time="2025-03-19T11:36:22.714530353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 19 11:36:23.555255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380646517.mount: Deactivated successfully. Mar 19 11:36:24.871633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 19 11:36:24.882407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:24.989407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:24.989607 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:36:25.023081 kubelet[2752]: E0319 11:36:25.022975 2752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:36:25.025437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:36:25.025698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:36:25.026356 systemd[1]: kubelet.service: Consumed 117ms CPU time, 94.1M memory peak. 
Mar 19 11:36:26.828280 containerd[1780]: time="2025-03-19T11:36:26.827395556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:26.876352 containerd[1780]: time="2025-03-19T11:36:26.876289347Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Mar 19 11:36:26.881391 containerd[1780]: time="2025-03-19T11:36:26.881335390Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:26.927144 containerd[1780]: time="2025-03-19T11:36:26.927054059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:36:26.928872 containerd[1780]: time="2025-03-19T11:36:26.928714500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.213880026s" Mar 19 11:36:26.928872 containerd[1780]: time="2025-03-19T11:36:26.928754220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 19 11:36:34.457325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:34.457474 systemd[1]: kubelet.service: Consumed 117ms CPU time, 94.1M memory peak. Mar 19 11:36:34.465480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:34.576191 systemd[1]: Reload requested from client PID 2791 ('systemctl') (unit session-7.scope)... Mar 19 11:36:34.576390 systemd[1]: Reloading... Mar 19 11:36:34.671414 zram_generator::config[2838]: No configuration found. Mar 19 11:36:34.781628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:36:34.896127 systemd[1]: Reloading finished in 319 ms. Mar 19 11:36:35.032983 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 19 11:36:35.033067 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 19 11:36:35.033358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:35.040529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:40.918041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:40.927524 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:36:40.965756 kubelet[2902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:36:40.966124 kubelet[2902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 19 11:36:40.966170 kubelet[2902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:36:40.966317 kubelet[2902]: I0319 11:36:40.966286 2902 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:36:41.973416 kubelet[2902]: I0319 11:36:41.973370 2902 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:36:41.973416 kubelet[2902]: I0319 11:36:41.973406 2902 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:36:41.973803 kubelet[2902]: I0319 11:36:41.973652 2902 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:36:41.990966 kubelet[2902]: E0319 11:36:41.990922 2902 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:41.991812 kubelet[2902]: I0319 11:36:41.991694 2902 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:36:42.003659 kubelet[2902]: E0319 11:36:42.003618 2902 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:36:42.003659 kubelet[2902]: I0319 11:36:42.003651 2902 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:36:42.007539 kubelet[2902]: I0319 11:36:42.007513 2902 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:36:42.008264 kubelet[2902]: I0319 11:36:42.008216 2902 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:36:42.008438 kubelet[2902]: I0319 11:36:42.008403 2902 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:36:42.008617 kubelet[2902]: I0319 11:36:42.008437 2902 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-4d5ab2e439","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:36:42.008703 kubelet[2902]: I0319 11:36:42.008624 2902 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:36:42.008703 kubelet[2902]: I0319 11:36:42.008635 2902 container_manager_linux.go:300] "Creating device plugin manager" Mar 19 11:36:42.008780 kubelet[2902]: I0319 11:36:42.008760 2902 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:36:42.010668 kubelet[2902]: I0319 11:36:42.010251 2902 kubelet.go:408] "Attempting to sync node with API server" Mar 19 11:36:42.010668 kubelet[2902]: I0319 11:36:42.010284 2902 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:36:42.010668 kubelet[2902]: I0319 11:36:42.010312 2902 kubelet.go:314] "Adding apiserver pod source" Mar 19 11:36:42.010668 kubelet[2902]: I0319 11:36:42.010322 2902 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:36:42.012464 kubelet[2902]: W0319 11:36:42.012188 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:42.012613 kubelet[2902]: E0319 11:36:42.012589 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:42.012854 kubelet[2902]: I0319 11:36:42.012839 2902 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:36:42.014435 kubelet[2902]: I0319 11:36:42.014414 2902 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:36:42.015376 kubelet[2902]: W0319 11:36:42.015360 2902 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 19 11:36:42.016941 kubelet[2902]: I0319 11:36:42.016815 2902 server.go:1269] "Started kubelet" Mar 19 11:36:42.018088 kubelet[2902]: W0319 11:36:42.017789 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:42.018088 kubelet[2902]: E0319 11:36:42.017838 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:42.018088 kubelet[2902]: I0319 11:36:42.017925 2902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:36:42.018829 kubelet[2902]: I0319 11:36:42.018800 2902 server.go:460] "Adding debug handlers to kubelet server" Mar 19 11:36:42.020317 kubelet[2902]: I0319 11:36:42.019547 2902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:36:42.020317 kubelet[2902]: I0319 11:36:42.019845 2902 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:36:42.021015 kubelet[2902]: I0319 11:36:42.020976 2902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:36:42.022514 kubelet[2902]: E0319 11:36:42.021505 2902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-4d5ab2e439.182e31333f768c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-4d5ab2e439,UID:ci-4230.1.0-a-4d5ab2e439,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-4d5ab2e439,},FirstTimestamp:2025-03-19 11:36:42.016787606 +0000 UTC m=+1.085815798,LastTimestamp:2025-03-19 11:36:42.016787606 +0000 UTC m=+1.085815798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-4d5ab2e439,}" Mar 19 11:36:42.023036 kubelet[2902]: I0319 11:36:42.023016 2902 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:36:42.025328 kubelet[2902]: I0319 11:36:42.025302 2902 
volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:36:42.025959 kubelet[2902]: E0319 11:36:42.025531 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.025959 kubelet[2902]: I0319 11:36:42.025792 2902 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:36:42.025959 kubelet[2902]: I0319 11:36:42.025849 2902 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:36:42.025959 kubelet[2902]: E0319 11:36:42.025914 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-4d5ab2e439?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="200ms" Mar 19 11:36:42.026289 kubelet[2902]: W0319 11:36:42.026177 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:42.027214 kubelet[2902]: E0319 11:36:42.027160 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:42.027301 kubelet[2902]: E0319 11:36:42.027288 2902 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:36:42.027536 kubelet[2902]: I0319 11:36:42.027517 2902 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:36:42.027615 kubelet[2902]: I0319 11:36:42.027594 2902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:36:42.029507 kubelet[2902]: I0319 11:36:42.029460 2902 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:36:42.058130 kubelet[2902]: I0319 11:36:42.058093 2902 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:36:42.058130 kubelet[2902]: I0319 11:36:42.058112 2902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:36:42.058130 kubelet[2902]: I0319 11:36:42.058131 2902 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:36:42.125648 kubelet[2902]: E0319 11:36:42.125601 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.227173 kubelet[2902]: E0319 11:36:42.225965 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.227173 kubelet[2902]: E0319 11:36:42.227122 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-4d5ab2e439?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="400ms" Mar 19 11:36:42.326304 kubelet[2902]: E0319 11:36:42.326266 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" 
not found" Mar 19 11:36:42.426648 kubelet[2902]: E0319 11:36:42.426599 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.527673 kubelet[2902]: E0319 11:36:42.527534 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.627929 kubelet[2902]: E0319 11:36:42.627892 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:42.628300 kubelet[2902]: E0319 11:36:42.628248 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-4d5ab2e439?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="800ms" Mar 19 11:36:42.728779 kubelet[2902]: E0319 11:36:42.728748 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:42.829283 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:42.929744 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.030215 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.130804 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.231433 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: W0319 11:36:43.289092 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.289154 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.331543 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.429418 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-4d5ab2e439?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="1.6s" Mar 19 11:36:44.137247 kubelet[2902]: E0319 11:36:43.432664 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137773 kubelet[2902]: W0319 
11:36:43.500519 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.500586 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:44.137773 kubelet[2902]: W0319 11:36:43.503200 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.503273 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.533116 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.633669 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.734138 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.834609 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137773 kubelet[2902]: E0319 11:36:43.935018 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137956 kubelet[2902]: E0319 11:36:44.036037 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.137956 kubelet[2902]: E0319 11:36:44.068029 2902 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:44.137956 kubelet[2902]: E0319 11:36:44.136492 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.154618 kubelet[2902]: I0319 11:36:44.154460 2902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:36:44.180956 kubelet[2902]: I0319 11:36:44.156822 2902 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:36:44.180956 kubelet[2902]: I0319 11:36:44.156850 2902 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:36:44.180956 kubelet[2902]: I0319 11:36:44.156872 2902 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:36:44.180956 kubelet[2902]: E0319 11:36:44.156918 2902 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:36:44.180956 kubelet[2902]: W0319 11:36:44.160953 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:44.180956 kubelet[2902]: E0319 11:36:44.161019 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:44.228876 kubelet[2902]: I0319 11:36:44.228844 2902 policy_none.go:49] "None policy: Start" Mar 19 11:36:44.229775 kubelet[2902]: I0319 11:36:44.229745 2902 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:36:44.230182 kubelet[2902]: I0319 11:36:44.229873 2902 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:36:44.236943 kubelet[2902]: E0319 11:36:44.236908 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.257185 kubelet[2902]: E0319 11:36:44.257142 2902 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 11:36:44.337184 kubelet[2902]: E0319 11:36:44.337135 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.338009 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:36:44.356347 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:36:44.359511 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:36:44.371556 kubelet[2902]: I0319 11:36:44.371024 2902 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:36:44.371556 kubelet[2902]: I0319 11:36:44.371278 2902 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:36:44.371556 kubelet[2902]: I0319 11:36:44.371292 2902 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:36:44.371742 kubelet[2902]: I0319 11:36:44.371590 2902 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:36:44.374401 kubelet[2902]: E0319 11:36:44.374313 2902 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:44.470050 systemd[1]: Created slice kubepods-burstable-pod742a5fbe514c4447d0c72c42be2a8bd7.slice - libcontainer container kubepods-burstable-pod742a5fbe514c4447d0c72c42be2a8bd7.slice. 
Mar 19 11:36:44.473727 kubelet[2902]: I0319 11:36:44.473691 2902 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.474291 kubelet[2902]: E0319 11:36:44.474266 2902 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.489648 systemd[1]: Created slice kubepods-burstable-podf8e2901140b1ca31e4f9b73fe4e8d683.slice - libcontainer container kubepods-burstable-podf8e2901140b1ca31e4f9b73fe4e8d683.slice. Mar 19 11:36:44.503044 systemd[1]: Created slice kubepods-burstable-pod9825025e7b619f7d7d0b27d35409b8dc.slice - libcontainer container kubepods-burstable-pod9825025e7b619f7d7d0b27d35409b8dc.slice. Mar 19 11:36:44.538195 kubelet[2902]: I0319 11:36:44.537951 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9825025e7b619f7d7d0b27d35409b8dc-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-4d5ab2e439\" (UID: \"9825025e7b619f7d7d0b27d35409b8dc\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538195 kubelet[2902]: I0319 11:36:44.537991 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538195 kubelet[2902]: I0319 11:36:44.538008 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538195 kubelet[2902]: I0319 11:36:44.538037 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538195 kubelet[2902]: I0319 11:36:44.538058 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538440 kubelet[2902]: I0319 11:36:44.538074 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538440 kubelet[2902]: I0319 11:36:44.538089 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538440 kubelet[2902]: I0319 11:36:44.538126 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.538440 kubelet[2902]: I0319 11:36:44.538141 2902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.676550 kubelet[2902]: I0319 11:36:44.676288 2902 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.676890 kubelet[2902]: E0319 11:36:44.676862 2902 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:44.788325 containerd[1780]: time="2025-03-19T11:36:44.788190553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-4d5ab2e439,Uid:742a5fbe514c4447d0c72c42be2a8bd7,Namespace:kube-system,Attempt:0,}" Mar 19 11:36:44.792898 containerd[1780]: time="2025-03-19T11:36:44.792691156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-4d5ab2e439,Uid:f8e2901140b1ca31e4f9b73fe4e8d683,Namespace:kube-system,Attempt:0,}" Mar 19 11:36:44.806836 containerd[1780]: time="2025-03-19T11:36:44.806780685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-4d5ab2e439,Uid:9825025e7b619f7d7d0b27d35409b8dc,Namespace:kube-system,Attempt:0,}" Mar 19 11:36:45.030437 kubelet[2902]: E0319 11:36:45.030390 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-4d5ab2e439?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="3.2s" Mar 19 11:36:45.079430 kubelet[2902]: I0319 11:36:45.079239 2902 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:45.081713 kubelet[2902]: E0319 11:36:45.080787 2902 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:45.221783 kubelet[2902]: W0319 11:36:45.221700 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:45.221783 kubelet[2902]: E0319 11:36:45.221748 2902 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:45.490078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789120088.mount: Deactivated successfully. Mar 19 11:36:45.508579 kubelet[2902]: W0319 11:36:45.508467 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:45.508579 kubelet[2902]: E0319 11:36:45.508539 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:45.524453 containerd[1780]: time="2025-03-19T11:36:45.524381868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:36:45.543740 containerd[1780]: time="2025-03-19T11:36:45.543673921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 19 11:36:45.552736 containerd[1780]: time="2025-03-19T11:36:45.552690207Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:36:45.560915 containerd[1780]: time="2025-03-19T11:36:45.559853771Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:36:45.567625 containerd[1780]: time="2025-03-19T11:36:45.567453736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:36:45.573260 containerd[1780]: time="2025-03-19T11:36:45.573018740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:36:45.575448 kubelet[2902]: W0319 11:36:45.575378 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:45.575448 kubelet[2902]: E0319 11:36:45.575426 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-4d5ab2e439&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:45.578292 containerd[1780]: time="2025-03-19T11:36:45.576828542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:36:45.578292 containerd[1780]: time="2025-03-19T11:36:45.577739583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 789.442189ms" Mar 19 11:36:45.584320 containerd[1780]: time="2025-03-19T11:36:45.584266827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:36:45.593008 containerd[1780]: time="2025-03-19T11:36:45.592963153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 800.196677ms" Mar 19 11:36:45.649641 containerd[1780]: time="2025-03-19T11:36:45.649422709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 842.567584ms" Mar 19 11:36:45.883675 kubelet[2902]: I0319 11:36:45.883296 2902 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:45.883675 kubelet[2902]: E0319 11:36:45.883642 2902 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:46.176263 containerd[1780]: time="2025-03-19T11:36:46.175122568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:36:46.176263 containerd[1780]: time="2025-03-19T11:36:46.175193328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:36:46.176263 containerd[1780]: time="2025-03-19T11:36:46.175209208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.176263 containerd[1780]: time="2025-03-19T11:36:46.175376488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.178477 containerd[1780]: time="2025-03-19T11:36:46.177643410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:36:46.178477 containerd[1780]: time="2025-03-19T11:36:46.177811010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:36:46.178477 containerd[1780]: time="2025-03-19T11:36:46.177835770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.178477 containerd[1780]: time="2025-03-19T11:36:46.178122730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.182009 containerd[1780]: time="2025-03-19T11:36:46.181919133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:36:46.182009 containerd[1780]: time="2025-03-19T11:36:46.181983853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:36:46.182178 containerd[1780]: time="2025-03-19T11:36:46.182002853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.182178 containerd[1780]: time="2025-03-19T11:36:46.182086053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:46.202396 systemd[1]: Started cri-containerd-1a049af42ad6570b7ee9538341211d59401e0e19eb6f1fb4b7ef81f52fdffd58.scope - libcontainer container 1a049af42ad6570b7ee9538341211d59401e0e19eb6f1fb4b7ef81f52fdffd58. Mar 19 11:36:46.207531 systemd[1]: Started cri-containerd-8e5c17d688a58ab110f9082b1fc30d87cc4fd26204bda871aaccd9dd96f42de4.scope - libcontainer container 8e5c17d688a58ab110f9082b1fc30d87cc4fd26204bda871aaccd9dd96f42de4. Mar 19 11:36:46.209358 systemd[1]: Started cri-containerd-b0b66a20d8e3d54a0acf96a3057fc2c6807bc663521011a1a9fec670fa6a4319.scope - libcontainer container b0b66a20d8e3d54a0acf96a3057fc2c6807bc663521011a1a9fec670fa6a4319. Mar 19 11:36:46.259861 containerd[1780]: time="2025-03-19T11:36:46.259808663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-4d5ab2e439,Uid:9825025e7b619f7d7d0b27d35409b8dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e5c17d688a58ab110f9082b1fc30d87cc4fd26204bda871aaccd9dd96f42de4\"" Mar 19 11:36:46.264294 containerd[1780]: time="2025-03-19T11:36:46.263680865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-4d5ab2e439,Uid:742a5fbe514c4447d0c72c42be2a8bd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a049af42ad6570b7ee9538341211d59401e0e19eb6f1fb4b7ef81f52fdffd58\"" Mar 19 11:36:46.266508 containerd[1780]: time="2025-03-19T11:36:46.266467627Z" level=info msg="CreateContainer within sandbox \"8e5c17d688a58ab110f9082b1fc30d87cc4fd26204bda871aaccd9dd96f42de4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:36:46.270119 containerd[1780]: time="2025-03-19T11:36:46.270075949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-4d5ab2e439,Uid:f8e2901140b1ca31e4f9b73fe4e8d683,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0b66a20d8e3d54a0acf96a3057fc2c6807bc663521011a1a9fec670fa6a4319\"" Mar 19 11:36:46.273036 containerd[1780]: time="2025-03-19T11:36:46.272867351Z" level=info msg="CreateContainer within sandbox \"b0b66a20d8e3d54a0acf96a3057fc2c6807bc663521011a1a9fec670fa6a4319\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:36:46.273494 containerd[1780]: time="2025-03-19T11:36:46.273473672Z" level=info msg="CreateContainer within sandbox \"1a049af42ad6570b7ee9538341211d59401e0e19eb6f1fb4b7ef81f52fdffd58\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:36:46.339074 kubelet[2902]: W0319 11:36:46.339030 2902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Mar 19 11:36:46.339074 kubelet[2902]: E0319 11:36:46.339078 2902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:36:46.364868 containerd[1780]: time="2025-03-19T11:36:46.364806491Z" level=info msg="CreateContainer within sandbox \"8e5c17d688a58ab110f9082b1fc30d87cc4fd26204bda871aaccd9dd96f42de4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de2abebf02f4c1e1ecaffc8506b644d5c1b1b560c39286966549ea043aee2d56\"" Mar 19 11:36:46.366661 containerd[1780]: time="2025-03-19T11:36:46.365576651Z" level=info msg="StartContainer for \"de2abebf02f4c1e1ecaffc8506b644d5c1b1b560c39286966549ea043aee2d56\"" Mar 19 11:36:46.389422 systemd[1]: Started cri-containerd-de2abebf02f4c1e1ecaffc8506b644d5c1b1b560c39286966549ea043aee2d56.scope - libcontainer container de2abebf02f4c1e1ecaffc8506b644d5c1b1b560c39286966549ea043aee2d56. Mar 19 11:36:46.395577 containerd[1780]: time="2025-03-19T11:36:46.395532830Z" level=info msg="CreateContainer within sandbox \"1a049af42ad6570b7ee9538341211d59401e0e19eb6f1fb4b7ef81f52fdffd58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"990a6f098d51142d90dea6fd8595762ee3995929ea063d2847249492d731784f\"" Mar 19 11:36:46.396879 containerd[1780]: time="2025-03-19T11:36:46.396414831Z" level=info msg="StartContainer for \"990a6f098d51142d90dea6fd8595762ee3995929ea063d2847249492d731784f\"" Mar 19 11:36:46.399006 containerd[1780]: time="2025-03-19T11:36:46.398883673Z" level=info msg="CreateContainer within sandbox \"b0b66a20d8e3d54a0acf96a3057fc2c6807bc663521011a1a9fec670fa6a4319\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22159002cbd58b62b8d8d600b37f1e18342eca2fec74f0200dfa90d1a7e2cb0d\"" Mar 19 11:36:46.400421 containerd[1780]: time="2025-03-19T11:36:46.400388714Z" level=info msg="StartContainer for \"22159002cbd58b62b8d8d600b37f1e18342eca2fec74f0200dfa90d1a7e2cb0d\"" Mar 19 11:36:46.436548 systemd[1]: Started cri-containerd-22159002cbd58b62b8d8d600b37f1e18342eca2fec74f0200dfa90d1a7e2cb0d.scope - libcontainer container 22159002cbd58b62b8d8d600b37f1e18342eca2fec74f0200dfa90d1a7e2cb0d. Mar 19 11:36:46.448108 containerd[1780]: time="2025-03-19T11:36:46.448062744Z" level=info msg="StartContainer for \"de2abebf02f4c1e1ecaffc8506b644d5c1b1b560c39286966549ea043aee2d56\" returns successfully" Mar 19 11:36:46.452962 systemd[1]: Started cri-containerd-990a6f098d51142d90dea6fd8595762ee3995929ea063d2847249492d731784f.scope - libcontainer container 990a6f098d51142d90dea6fd8595762ee3995929ea063d2847249492d731784f. 
Mar 19 11:36:46.527975 containerd[1780]: time="2025-03-19T11:36:46.527921876Z" level=info msg="StartContainer for \"990a6f098d51142d90dea6fd8595762ee3995929ea063d2847249492d731784f\" returns successfully" Mar 19 11:36:46.528177 containerd[1780]: time="2025-03-19T11:36:46.527921796Z" level=info msg="StartContainer for \"22159002cbd58b62b8d8d600b37f1e18342eca2fec74f0200dfa90d1a7e2cb0d\" returns successfully" Mar 19 11:36:47.485701 kubelet[2902]: I0319 11:36:47.485668 2902 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:48.472253 kubelet[2902]: E0319 11:36:48.472201 2902 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-a-4d5ab2e439\" not found" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:48.479032 kubelet[2902]: E0319 11:36:48.478774 2902 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-4d5ab2e439.182e31333f768c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-4d5ab2e439,UID:ci-4230.1.0-a-4d5ab2e439,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-4d5ab2e439,},FirstTimestamp:2025-03-19 11:36:42.016787606 +0000 UTC m=+1.085815798,LastTimestamp:2025-03-19 11:36:42.016787606 +0000 UTC m=+1.085815798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-4d5ab2e439,}" Mar 19 11:36:48.578035 kubelet[2902]: E0319 11:36:48.577792 2902 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-4d5ab2e439.182e31334016964c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-4d5ab2e439,UID:ci-4230.1.0-a-4d5ab2e439,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-4d5ab2e439,},FirstTimestamp:2025-03-19 11:36:42.027275852 +0000 UTC m=+1.096304044,LastTimestamp:2025-03-19 11:36:42.027275852 +0000 UTC m=+1.096304044,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-4d5ab2e439,}" Mar 19 11:36:48.585936 kubelet[2902]: I0319 11:36:48.585781 2902 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:48.585936 kubelet[2902]: E0319 11:36:48.585820 2902 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.1.0-a-4d5ab2e439\": node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:48.646021 kubelet[2902]: E0319 11:36:48.645969 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:48.663660 kubelet[2902]: E0319 11:36:48.663543 2902 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-a-4d5ab2e439.182e313341e3c328 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-4d5ab2e439,UID:ci-4230.1.0-a-4d5ab2e439,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230.1.0-a-4d5ab2e439 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-4d5ab2e439,},FirstTimestamp:2025-03-19 11:36:42.057499432 +0000 UTC m=+1.126527624,LastTimestamp:2025-03-19 11:36:42.057499432 +0000 UTC m=+1.126527624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-4d5ab2e439,}" Mar 19 11:36:48.746558 kubelet[2902]: E0319 11:36:48.746152 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:48.846406 kubelet[2902]: E0319 11:36:48.846355 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:48.947482 kubelet[2902]: E0319 11:36:48.947443 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.048580 kubelet[2902]: E0319 11:36:49.048475 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.151350 kubelet[2902]: E0319 11:36:49.151296 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.252038 kubelet[2902]: E0319 11:36:49.251990 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.353632 kubelet[2902]: E0319 11:36:49.353352 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.453893 kubelet[2902]: E0319 11:36:49.453845 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.554603 kubelet[2902]: E0319 11:36:49.554554 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.655086 kubelet[2902]: E0319 11:36:49.655039 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.756150 kubelet[2902]: E0319 11:36:49.756105 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.857310 kubelet[2902]: E0319 11:36:49.857260 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:49.959737 kubelet[2902]: E0319 11:36:49.959414 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.059835 kubelet[2902]: E0319 11:36:50.059793 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.160522 kubelet[2902]: E0319 11:36:50.160474 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.261171 kubelet[2902]: E0319 11:36:50.261054 2902 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.356781 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-7.scope)... Mar 19 11:36:50.356800 systemd[1]: Reloading... Mar 19 11:36:50.362554 kubelet[2902]: E0319 11:36:50.362520 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.460341 zram_generator::config[3226]: No configuration found. Mar 19 11:36:50.462887 kubelet[2902]: E0319 11:36:50.462827 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.563331 kubelet[2902]: E0319 11:36:50.563058 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.578554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:36:50.663650 kubelet[2902]: E0319 11:36:50.663594 2902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:50.707331 systemd[1]: Reloading finished in 350 ms. Mar 19 11:36:50.731090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:50.750336 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:36:50.750607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:50.750676 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 116.5M memory peak. Mar 19 11:36:50.757633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:36:51.015103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:36:51.018955 (kubelet)[3287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:36:51.059468 kubelet[3287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:36:51.059868 kubelet[3287]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:36:51.059868 kubelet[3287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:36:51.060036 kubelet[3287]: I0319 11:36:51.059985 3287 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:36:51.067581 kubelet[3287]: I0319 11:36:51.067541 3287 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:36:51.067889 kubelet[3287]: I0319 11:36:51.067744 3287 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:36:51.068188 kubelet[3287]: I0319 11:36:51.068134 3287 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:36:51.070124 kubelet[3287]: I0319 11:36:51.070073 3287 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:36:51.072950 kubelet[3287]: I0319 11:36:51.072748 3287 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:36:51.077035 kubelet[3287]: E0319 11:36:51.076944 3287 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:36:51.077347 kubelet[3287]: I0319 11:36:51.077160 3287 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:36:51.080287 kubelet[3287]: I0319 11:36:51.080198 3287 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:36:51.080564 kubelet[3287]: I0319 11:36:51.080511 3287 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:36:51.081179 kubelet[3287]: I0319 11:36:51.080715 3287 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:36:51.081179 kubelet[3287]: I0319 11:36:51.080745 3287 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.1.0-a-4d5ab2e439","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:36:51.081179 kubelet[3287]: I0319 11:36:51.080923 3287 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:36:51.081179 kubelet[3287]: I0319 11:36:51.080936 3287 container_manager_linux.go:300] "Creating device plugin manager" Mar 19 11:36:51.081374 kubelet[3287]: I0319 11:36:51.080969 3287 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:36:51.081374 kubelet[3287]: I0319 11:36:51.081087 3287 kubelet.go:408] "Attempting to sync node with API server" Mar 19 11:36:51.081374 kubelet[3287]: I0319 11:36:51.081098 3287 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:36:51.081374 kubelet[3287]: I0319 11:36:51.081119 3287 kubelet.go:314] "Adding apiserver pod source" Mar 19 11:36:51.081374 kubelet[3287]: I0319 11:36:51.081129 3287 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:36:51.083976 kubelet[3287]: I0319 11:36:51.083951 3287 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:36:51.085036 kubelet[3287]: I0319 11:36:51.085007 3287 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:36:51.086306 kubelet[3287]: I0319 11:36:51.086288 3287 server.go:1269] "Started kubelet" Mar 19 11:36:51.089777 kubelet[3287]: I0319 11:36:51.089629 3287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:36:51.098267 kubelet[3287]: I0319 11:36:51.096286 3287 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:36:51.109600 kubelet[3287]: I0319 11:36:51.096455 3287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:36:51.111256 kubelet[3287]: I0319 11:36:51.110986 3287 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:36:51.111256 kubelet[3287]: I0319 11:36:51.097891 3287 
volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:36:51.112700 kubelet[3287]: I0319 11:36:51.112678 3287 server.go:460] "Adding debug handlers to kubelet server" Mar 19 11:36:51.114131 kubelet[3287]: E0319 11:36:51.097934 3287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-4d5ab2e439\" not found" Mar 19 11:36:51.117279 kubelet[3287]: I0319 11:36:51.114946 3287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.118198 3287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.118304 3287 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.118334 3287 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:36:51.118937 kubelet[3287]: E0319 11:36:51.118372 3287 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.096809 3287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.097901 3287 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.118684 3287 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:36:51.118937 kubelet[3287]: I0319 11:36:51.115438 3287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:36:51.145331 kubelet[3287]: I0319 11:36:51.145301 3287 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:36:51.145784 kubelet[3287]: I0319 11:36:51.145763 3287 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:36:51.150185 kubelet[3287]: E0319 11:36:51.150152 3287 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:36:51.200717 kubelet[3287]: I0319 11:36:51.200680 3287 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:36:51.200717 kubelet[3287]: I0319 11:36:51.200704 3287 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:36:51.200717 kubelet[3287]: I0319 11:36:51.200726 3287 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:36:51.200893 kubelet[3287]: I0319 11:36:51.200886 3287 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:36:51.200924 kubelet[3287]: I0319 11:36:51.200897 3287 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:36:51.200924 kubelet[3287]: I0319 11:36:51.200914 3287 policy_none.go:49] "None policy: Start" Mar 19 11:36:51.201673 kubelet[3287]: I0319 11:36:51.201655 3287 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:36:51.202681 kubelet[3287]: I0319 11:36:51.201825 3287 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:36:51.202681 kubelet[3287]: I0319 11:36:51.201995 3287 state_mem.go:75] "Updated machine memory state" Mar 19 11:36:51.206714 kubelet[3287]: I0319 11:36:51.206686 3287 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:36:51.206885 kubelet[3287]: I0319 11:36:51.206863 3287 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:36:51.206967 kubelet[3287]: I0319 11:36:51.206882 3287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:36:51.207561 kubelet[3287]: I0319 11:36:51.207436 3287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:36:51.234428 kubelet[3287]: W0319 11:36:51.234197 3287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 11:36:51.238202 kubelet[3287]: W0319 11:36:51.238181 3287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 11:36:51.238535 kubelet[3287]: W0319 11:36:51.238395 3287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 11:36:51.313262 kubelet[3287]: I0319 11:36:51.313142 3287 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320016 kubelet[3287]: I0319 11:36:51.319902 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320016 kubelet[3287]: I0319 11:36:51.319938 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320016 kubelet[3287]: I0319 11:36:51.319956 3287 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320016 kubelet[3287]: I0319 11:36:51.319972 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320016 kubelet[3287]: I0319 11:36:51.319994 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/742a5fbe514c4447d0c72c42be2a8bd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" (UID: \"742a5fbe514c4447d0c72c42be2a8bd7\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320256 kubelet[3287]: I0319 11:36:51.320010 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320256 kubelet[3287]: I0319 11:36:51.320028 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320256 kubelet[3287]: I0319 11:36:51.320045 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e2901140b1ca31e4f9b73fe4e8d683-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-4d5ab2e439\" (UID: \"f8e2901140b1ca31e4f9b73fe4e8d683\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.320256 kubelet[3287]: I0319 11:36:51.320061 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9825025e7b619f7d7d0b27d35409b8dc-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-4d5ab2e439\" (UID: \"9825025e7b619f7d7d0b27d35409b8dc\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.329806 kubelet[3287]: I0319 11:36:51.329459 3287 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:51.329806 kubelet[3287]: I0319 11:36:51.329551 3287 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:52.083257 kubelet[3287]: I0319 11:36:52.082991 3287 apiserver.go:52] "Watching apiserver" Mar 19 11:36:52.121099 kubelet[3287]: I0319 11:36:52.119209 3287 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:36:52.188240 kubelet[3287]: 
W0319 11:36:52.188111 3287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 11:36:52.188240 kubelet[3287]: E0319 11:36:52.188176 3287 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.0-a-4d5ab2e439\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:52.188442 kubelet[3287]: W0319 11:36:52.188419 3287 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 19 11:36:52.188529 kubelet[3287]: E0319 11:36:52.188458 3287 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-a-4d5ab2e439\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" Mar 19 11:36:52.237272 kubelet[3287]: I0319 11:36:52.237176 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-4d5ab2e439" podStartSLOduration=1.237145899 podStartE2EDuration="1.237145899s" podCreationTimestamp="2025-03-19 11:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:52.221124288 +0000 UTC m=+1.198239569" watchObservedRunningTime="2025-03-19 11:36:52.237145899 +0000 UTC m=+1.214261180" Mar 19 11:36:52.259183 kubelet[3287]: I0319 11:36:52.259114 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-a-4d5ab2e439" podStartSLOduration=1.259097873 podStartE2EDuration="1.259097873s" podCreationTimestamp="2025-03-19 11:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:52.237856699 +0000 UTC m=+1.214971980" watchObservedRunningTime="2025-03-19 11:36:52.259097873 +0000 UTC m=+1.236213154" Mar 19 11:36:52.399344 sudo[2299]: pam_unix(sudo:session): session closed for user root Mar 19 11:36:52.469309 sshd[2298]: Connection closed by 10.200.16.10 port 38976 Mar 19 11:36:52.470105 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:52.473173 systemd[1]: sshd@4-10.200.20.43:22-10.200.16.10:38976.service: Deactivated successfully. Mar 19 11:36:52.475114 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:36:52.475529 systemd[1]: session-7.scope: Consumed 8.152s CPU time, 217.5M memory peak. Mar 19 11:36:52.477794 systemd-logind[1744]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:36:52.479009 systemd-logind[1744]: Removed session 7. Mar 19 11:36:56.289961 kubelet[3287]: I0319 11:36:56.289535 3287 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:36:56.290455 containerd[1780]: time="2025-03-19T11:36:56.289863038Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 19 11:36:56.291004 kubelet[3287]: I0319 11:36:56.290791 3287 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:36:57.047262 kubelet[3287]: I0319 11:36:57.045415 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-a-4d5ab2e439" podStartSLOduration=6.045383055 podStartE2EDuration="6.045383055s" podCreationTimestamp="2025-03-19 11:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:52.259432553 +0000 UTC m=+1.236547834" watchObservedRunningTime="2025-03-19 11:36:57.045383055 +0000 UTC m=+6.022498336" Mar 19 11:36:57.056274 systemd[1]: Created slice kubepods-burstable-pod50d9e733_a9f5_42ee_8267_4619d0da259c.slice - libcontainer container kubepods-burstable-pod50d9e733_a9f5_42ee_8267_4619d0da259c.slice. Mar 19 11:36:57.061625 kubelet[3287]: I0319 11:36:57.060755 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/50d9e733-a9f5-42ee-8267-4619d0da259c-cni\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.061625 kubelet[3287]: I0319 11:36:57.060803 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/50d9e733-a9f5-42ee-8267-4619d0da259c-run\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.061625 kubelet[3287]: I0319 11:36:57.060822 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/50d9e733-a9f5-42ee-8267-4619d0da259c-cni-plugin\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.061625 kubelet[3287]: I0319 11:36:57.060838 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/50d9e733-a9f5-42ee-8267-4619d0da259c-flannel-cfg\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.061625 kubelet[3287]: I0319 11:36:57.060855 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50d9e733-a9f5-42ee-8267-4619d0da259c-xtables-lock\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.061841 kubelet[3287]: I0319 11:36:57.060885 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6m4t\" (UniqueName: \"kubernetes.io/projected/50d9e733-a9f5-42ee-8267-4619d0da259c-kube-api-access-c6m4t\") pod \"kube-flannel-ds-6zd7z\" (UID: \"50d9e733-a9f5-42ee-8267-4619d0da259c\") " pod="kube-flannel/kube-flannel-ds-6zd7z" Mar 19 11:36:57.071647 systemd[1]: Created slice kubepods-besteffort-pod02843736_0d93_459b_9f91_12b64935018f.slice - libcontainer container kubepods-besteffort-pod02843736_0d93_459b_9f91_12b64935018f.slice. 
Mar 19 11:36:57.162424 kubelet[3287]: I0319 11:36:57.161773 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79tz9\" (UniqueName: \"kubernetes.io/projected/02843736-0d93-459b-9f91-12b64935018f-kube-api-access-79tz9\") pod \"kube-proxy-828l4\" (UID: \"02843736-0d93-459b-9f91-12b64935018f\") " pod="kube-system/kube-proxy-828l4" Mar 19 11:36:57.162424 kubelet[3287]: I0319 11:36:57.161826 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02843736-0d93-459b-9f91-12b64935018f-lib-modules\") pod \"kube-proxy-828l4\" (UID: \"02843736-0d93-459b-9f91-12b64935018f\") " pod="kube-system/kube-proxy-828l4" Mar 19 11:36:57.162424 kubelet[3287]: I0319 11:36:57.161862 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02843736-0d93-459b-9f91-12b64935018f-kube-proxy\") pod \"kube-proxy-828l4\" (UID: \"02843736-0d93-459b-9f91-12b64935018f\") " pod="kube-system/kube-proxy-828l4" Mar 19 11:36:57.162424 kubelet[3287]: I0319 11:36:57.161880 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02843736-0d93-459b-9f91-12b64935018f-xtables-lock\") pod \"kube-proxy-828l4\" (UID: \"02843736-0d93-459b-9f91-12b64935018f\") " pod="kube-system/kube-proxy-828l4" Mar 19 11:36:57.171286 kubelet[3287]: E0319 11:36:57.170530 3287 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 19 11:36:57.171286 kubelet[3287]: E0319 11:36:57.170567 3287 projected.go:194] Error preparing data for projected volume kube-api-access-c6m4t for pod kube-flannel/kube-flannel-ds-6zd7z: configmap "kube-root-ca.crt" not found Mar 19 11:36:57.171286 kubelet[3287]: E0319 11:36:57.170656 3287 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50d9e733-a9f5-42ee-8267-4619d0da259c-kube-api-access-c6m4t podName:50d9e733-a9f5-42ee-8267-4619d0da259c nodeName:}" failed. No retries permitted until 2025-03-19 11:36:57.670635178 +0000 UTC m=+6.647750459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c6m4t" (UniqueName: "kubernetes.io/projected/50d9e733-a9f5-42ee-8267-4619d0da259c-kube-api-access-c6m4t") pod "kube-flannel-ds-6zd7z" (UID: "50d9e733-a9f5-42ee-8267-4619d0da259c") : configmap "kube-root-ca.crt" not found Mar 19 11:36:57.382047 containerd[1780]: time="2025-03-19T11:36:57.381726477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-828l4,Uid:02843736-0d93-459b-9f91-12b64935018f,Namespace:kube-system,Attempt:0,}" Mar 19 11:36:57.549218 containerd[1780]: time="2025-03-19T11:36:57.549098747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:36:57.549669 containerd[1780]: time="2025-03-19T11:36:57.549604187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:36:57.549889 containerd[1780]: time="2025-03-19T11:36:57.549739628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:57.550045 containerd[1780]: time="2025-03-19T11:36:57.550006948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:57.566131 systemd[1]: run-containerd-runc-k8s.io-ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf-runc.Dxhc3f.mount: Deactivated successfully. Mar 19 11:36:57.576400 systemd[1]: Started cri-containerd-ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf.scope - libcontainer container ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf. Mar 19 11:36:57.599731 containerd[1780]: time="2025-03-19T11:36:57.599618660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-828l4,Uid:02843736-0d93-459b-9f91-12b64935018f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf\"" Mar 19 11:36:57.603666 containerd[1780]: time="2025-03-19T11:36:57.603619543Z" level=info msg="CreateContainer within sandbox \"ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:36:57.676424 containerd[1780]: time="2025-03-19T11:36:57.676289551Z" level=info msg="CreateContainer within sandbox \"ec6dff9ffbc638736c33f7e5f7386bd2bdf50aadfe0b4335df7e1cb80ebd2eaf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23167ab0e96ab5dd659f554bb93ebc13d7e57a06f4db331c46ff31d8316308d2\"" Mar 19 11:36:57.676424 containerd[1780]: time="2025-03-19T11:36:57.676968791Z" level=info msg="StartContainer for \"23167ab0e96ab5dd659f554bb93ebc13d7e57a06f4db331c46ff31d8316308d2\"" Mar 19 11:36:57.702434 systemd[1]: Started cri-containerd-23167ab0e96ab5dd659f554bb93ebc13d7e57a06f4db331c46ff31d8316308d2.scope - libcontainer container 23167ab0e96ab5dd659f554bb93ebc13d7e57a06f4db331c46ff31d8316308d2. Mar 19 11:36:57.735482 containerd[1780]: time="2025-03-19T11:36:57.735428190Z" level=info msg="StartContainer for \"23167ab0e96ab5dd659f554bb93ebc13d7e57a06f4db331c46ff31d8316308d2\" returns successfully" Mar 19 11:36:57.966574 containerd[1780]: time="2025-03-19T11:36:57.966457942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6zd7z,Uid:50d9e733-a9f5-42ee-8267-4619d0da259c,Namespace:kube-flannel,Attempt:0,}" Mar 19 11:36:58.062658 containerd[1780]: time="2025-03-19T11:36:58.062536605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:36:58.062658 containerd[1780]: time="2025-03-19T11:36:58.062592285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:36:58.062658 containerd[1780]: time="2025-03-19T11:36:58.062603805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:58.062884 containerd[1780]: time="2025-03-19T11:36:58.062678005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:36:58.082430 systemd[1]: Started cri-containerd-f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080.scope - libcontainer container f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080. 
Mar 19 11:36:58.114104 containerd[1780]: time="2025-03-19T11:36:58.114054399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6zd7z,Uid:50d9e733-a9f5-42ee-8267-4619d0da259c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\"" Mar 19 11:36:58.117794 containerd[1780]: time="2025-03-19T11:36:58.117742522Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 19 11:36:59.123770 kubelet[3287]: I0319 11:36:59.123525 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-828l4" podStartSLOduration=2.123509584 podStartE2EDuration="2.123509584s" podCreationTimestamp="2025-03-19 11:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:58.201236577 +0000 UTC m=+7.178351858" watchObservedRunningTime="2025-03-19 11:36:59.123509584 +0000 UTC m=+8.100624865" Mar 19 11:37:00.066678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333964869.mount: Deactivated successfully. Mar 19 11:37:00.186270 containerd[1780]: time="2025-03-19T11:37:00.186166684Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:00.189308 containerd[1780]: time="2025-03-19T11:37:00.189243206Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Mar 19 11:37:00.195870 containerd[1780]: time="2025-03-19T11:37:00.195818050Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:00.208137 containerd[1780]: time="2025-03-19T11:37:00.205253416Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:00.208137 containerd[1780]: time="2025-03-19T11:37:00.206064337Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.088279815s" Mar 19 11:37:00.208137 containerd[1780]: time="2025-03-19T11:37:00.206089937Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Mar 19 11:37:00.216340 containerd[1780]: time="2025-03-19T11:37:00.215101343Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 19 11:37:00.272425 containerd[1780]: time="2025-03-19T11:37:00.272381100Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004\"" Mar 19 11:37:00.273079 containerd[1780]: time="2025-03-19T11:37:00.273054901Z" level=info 
msg="StartContainer for \"52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004\"" Mar 19 11:37:00.298407 systemd[1]: Started cri-containerd-52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004.scope - libcontainer container 52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004. Mar 19 11:37:00.324682 systemd[1]: cri-containerd-52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004.scope: Deactivated successfully. Mar 19 11:37:00.330602 containerd[1780]: time="2025-03-19T11:37:00.330555979Z" level=info msg="StartContainer for \"52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004\" returns successfully" Mar 19 11:37:00.393959 containerd[1780]: time="2025-03-19T11:37:00.393890660Z" level=info msg="shim disconnected" id=52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004 namespace=k8s.io Mar 19 11:37:00.393959 containerd[1780]: time="2025-03-19T11:37:00.393949580Z" level=warning msg="cleaning up after shim disconnected" id=52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004 namespace=k8s.io Mar 19 11:37:00.393959 containerd[1780]: time="2025-03-19T11:37:00.393957500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:37:00.997622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52d7d47026f56edeed4bd741c3ad2b9b115b242fd436116b095b235df7354004-rootfs.mount: Deactivated successfully. Mar 19 11:37:01.198420 containerd[1780]: time="2025-03-19T11:37:01.198347510Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 19 11:37:03.165623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4539513.mount: Deactivated successfully. Mar 19 11:37:04.350342 containerd[1780]: time="2025-03-19T11:37:04.349239257Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:04.354118 containerd[1780]: time="2025-03-19T11:37:04.354033059Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Mar 19 11:37:04.358898 containerd[1780]: time="2025-03-19T11:37:04.358837102Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:04.366496 containerd[1780]: time="2025-03-19T11:37:04.365207626Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:37:04.366496 containerd[1780]: time="2025-03-19T11:37:04.366360866Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.167970676s" Mar 19 11:37:04.366496 containerd[1780]: time="2025-03-19T11:37:04.366391746Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Mar 19 11:37:04.370504 containerd[1780]: time="2025-03-19T11:37:04.370475428Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:37:04.417752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007855788.mount: Deactivated successfully. Mar 19 11:37:04.438369 containerd[1780]: time="2025-03-19T11:37:04.438310346Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386\"" Mar 19 11:37:04.438970 containerd[1780]: time="2025-03-19T11:37:04.438868027Z" level=info msg="StartContainer for \"513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386\"" Mar 19 11:37:04.469427 systemd[1]: Started cri-containerd-513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386.scope - libcontainer container 513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386. Mar 19 11:37:04.491408 systemd[1]: cri-containerd-513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386.scope: Deactivated successfully. Mar 19 11:37:04.498231 containerd[1780]: time="2025-03-19T11:37:04.497496779Z" level=info msg="StartContainer for \"513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386\" returns successfully" Mar 19 11:37:04.513745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386-rootfs.mount: Deactivated successfully. Mar 19 11:37:04.545491 kubelet[3287]: I0319 11:37:04.545276 3287 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 19 11:37:04.608101 systemd[1]: Created slice kubepods-burstable-poda4450b4e_0fd8_439b_9bd9_842f406c7bd3.slice - libcontainer container kubepods-burstable-poda4450b4e_0fd8_439b_9bd9_842f406c7bd3.slice. 
Mar 19 11:37:04.612944 kubelet[3287]: I0319 11:37:04.612836 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09-config-volume\") pod \"coredns-6f6b679f8f-c2q8b\" (UID: \"45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09\") " pod="kube-system/coredns-6f6b679f8f-c2q8b" Mar 19 11:37:04.612944 kubelet[3287]: I0319 11:37:04.612906 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mvb\" (UniqueName: \"kubernetes.io/projected/45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09-kube-api-access-h2mvb\") pod \"coredns-6f6b679f8f-c2q8b\" (UID: \"45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09\") " pod="kube-system/coredns-6f6b679f8f-c2q8b" Mar 19 11:37:04.612944 kubelet[3287]: I0319 11:37:04.612928 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7wg\" (UniqueName: \"kubernetes.io/projected/a4450b4e-0fd8-439b-9bd9-842f406c7bd3-kube-api-access-zr7wg\") pod \"coredns-6f6b679f8f-sdbtz\" (UID: \"a4450b4e-0fd8-439b-9bd9-842f406c7bd3\") " pod="kube-system/coredns-6f6b679f8f-sdbtz" Mar 19 11:37:04.612944 kubelet[3287]: I0319 11:37:04.612945 3287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4450b4e-0fd8-439b-9bd9-842f406c7bd3-config-volume\") pod \"coredns-6f6b679f8f-sdbtz\" (UID: \"a4450b4e-0fd8-439b-9bd9-842f406c7bd3\") " pod="kube-system/coredns-6f6b679f8f-sdbtz" Mar 19 11:37:04.620421 systemd[1]: Created slice kubepods-burstable-pod45e4d4f8_7f5f_44c5_a3c8_8b4dccac7e09.slice - libcontainer container kubepods-burstable-pod45e4d4f8_7f5f_44c5_a3c8_8b4dccac7e09.slice. 
Mar 19 11:37:04.916122 containerd[1780]: time="2025-03-19T11:37:04.915750892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sdbtz,Uid:a4450b4e-0fd8-439b-9bd9-842f406c7bd3,Namespace:kube-system,Attempt:0,}" Mar 19 11:37:04.924182 containerd[1780]: time="2025-03-19T11:37:04.924129736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c2q8b,Uid:45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09,Namespace:kube-system,Attempt:0,}" Mar 19 11:37:05.007610 containerd[1780]: time="2025-03-19T11:37:05.007515583Z" level=info msg="shim disconnected" id=513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386 namespace=k8s.io Mar 19 11:37:05.007610 containerd[1780]: time="2025-03-19T11:37:05.007567383Z" level=warning msg="cleaning up after shim disconnected" id=513faf3f8e25c67020b464fc46c01e907d59eef82bdc79911a71ed34d38ec386 namespace=k8s.io Mar 19 11:37:05.007610 containerd[1780]: time="2025-03-19T11:37:05.007574943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:37:05.100889 containerd[1780]: time="2025-03-19T11:37:05.100748795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sdbtz,Uid:a4450b4e-0fd8-439b-9bd9-842f406c7bd3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af7e3668957a0303625de589a6aaba774f8af4e63a9bd8e4e8309bd0beb04278\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:37:05.101446 kubelet[3287]: E0319 11:37:05.101158 3287 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7e3668957a0303625de589a6aaba774f8af4e63a9bd8e4e8309bd0beb04278\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:37:05.101446 kubelet[3287]: E0319 11:37:05.101246 3287 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7e3668957a0303625de589a6aaba774f8af4e63a9bd8e4e8309bd0beb04278\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-sdbtz" Mar 19 11:37:05.101446 kubelet[3287]: E0319 11:37:05.101266 3287 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7e3668957a0303625de589a6aaba774f8af4e63a9bd8e4e8309bd0beb04278\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-sdbtz" Mar 19 11:37:05.101446 kubelet[3287]: E0319 11:37:05.101313 3287 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sdbtz_kube-system(a4450b4e-0fd8-439b-9bd9-842f406c7bd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sdbtz_kube-system(a4450b4e-0fd8-439b-9bd9-842f406c7bd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af7e3668957a0303625de589a6aaba774f8af4e63a9bd8e4e8309bd0beb04278\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-sdbtz" podUID="a4450b4e-0fd8-439b-9bd9-842f406c7bd3" Mar 19 11:37:05.106784 
containerd[1780]: time="2025-03-19T11:37:05.106717358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c2q8b,Uid:45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a6322245ebdb04ba64072d0fa62ba0a9f6c41a6535e541b189bc7c394a902ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:37:05.106996 kubelet[3287]: E0319 11:37:05.106935 3287 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6322245ebdb04ba64072d0fa62ba0a9f6c41a6535e541b189bc7c394a902ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:37:05.107052 kubelet[3287]: E0319 11:37:05.107016 3287 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6322245ebdb04ba64072d0fa62ba0a9f6c41a6535e541b189bc7c394a902ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-c2q8b" Mar 19 11:37:05.107052 kubelet[3287]: E0319 11:37:05.107038 3287 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a6322245ebdb04ba64072d0fa62ba0a9f6c41a6535e541b189bc7c394a902ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-c2q8b" Mar 19 11:37:05.107124 kubelet[3287]: E0319 11:37:05.107077 3287 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c2q8b_kube-system(45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c2q8b_kube-system(45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a6322245ebdb04ba64072d0fa62ba0a9f6c41a6535e541b189bc7c394a902ea\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-c2q8b" podUID="45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09" Mar 19 11:37:05.207941 containerd[1780]: time="2025-03-19T11:37:05.207827894Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 19 11:37:05.256289 containerd[1780]: time="2025-03-19T11:37:05.256121721Z" level=info msg="CreateContainer within sandbox \"f6b8f27368f21b9650a447dc5d523023fbf5fd21d8c7a43d246aa1f8463a6080\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7710f82588e2cea4d0edb56c97df1ea27297e0c5ac2ca6f3befa640ddaffa815\"" Mar 19 11:37:05.258278 containerd[1780]: time="2025-03-19T11:37:05.257017081Z" level=info msg="StartContainer for \"7710f82588e2cea4d0edb56c97df1ea27297e0c5ac2ca6f3befa640ddaffa815\"" Mar 19 11:37:05.284419 systemd[1]: Started cri-containerd-7710f82588e2cea4d0edb56c97df1ea27297e0c5ac2ca6f3befa640ddaffa815.scope - libcontainer container 7710f82588e2cea4d0edb56c97df1ea27297e0c5ac2ca6f3befa640ddaffa815. 
Mar 19 11:37:05.312520 containerd[1780]: time="2025-03-19T11:37:05.312398952Z" level=info msg="StartContainer for \"7710f82588e2cea4d0edb56c97df1ea27297e0c5ac2ca6f3befa640ddaffa815\" returns successfully" Mar 19 11:37:06.224150 kubelet[3287]: I0319 11:37:06.223453 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6zd7z" podStartSLOduration=2.972459233 podStartE2EDuration="9.223436099s" podCreationTimestamp="2025-03-19 11:36:57 +0000 UTC" firstStartedPulling="2025-03-19 11:36:58.116513401 +0000 UTC m=+7.093628682" lastFinishedPulling="2025-03-19 11:37:04.367490267 +0000 UTC m=+13.344605548" observedRunningTime="2025-03-19 11:37:06.223369299 +0000 UTC m=+15.200484580" watchObservedRunningTime="2025-03-19 11:37:06.223436099 +0000 UTC m=+15.200551380" Mar 19 11:37:06.443097 systemd-networkd[1492]: flannel.1: Link UP Mar 19 11:37:06.443108 systemd-networkd[1492]: flannel.1: Gained carrier Mar 19 11:37:08.352372 systemd-networkd[1492]: flannel.1: Gained IPv6LL Mar 19 11:37:16.119595 containerd[1780]: time="2025-03-19T11:37:16.119549113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c2q8b,Uid:45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09,Namespace:kube-system,Attempt:0,}" Mar 19 11:37:16.425207 systemd-networkd[1492]: cni0: Link UP Mar 19 11:37:16.425215 systemd-networkd[1492]: cni0: Gained carrier Mar 19 11:37:16.429535 systemd-networkd[1492]: cni0: Lost carrier Mar 19 11:37:16.458867 systemd-networkd[1492]: vethbb496037: Link UP Mar 19 11:37:16.468461 kernel: cni0: port 1(vethbb496037) entered blocking state Mar 19 11:37:16.468551 kernel: cni0: port 1(vethbb496037) entered disabled state Mar 19 11:37:16.474525 kernel: vethbb496037: entered allmulticast mode Mar 19 11:37:16.478303 kernel: vethbb496037: entered promiscuous mode Mar 19 11:37:16.483354 kernel: cni0: port 1(vethbb496037) entered blocking state Mar 19 11:37:16.483435 kernel: cni0: port 1(vethbb496037) entered forwarding state Mar 19 11:37:16.491296 kernel: cni0: port 1(vethbb496037) entered disabled state Mar 19 11:37:16.503484 kernel: cni0: port 1(vethbb496037) entered blocking state Mar 19 11:37:16.503984 kernel: cni0: port 1(vethbb496037) entered forwarding state Mar 19 11:37:16.503610 systemd-networkd[1492]: vethbb496037: Gained carrier Mar 19 11:37:16.504939 systemd-networkd[1492]: cni0: Gained carrier Mar 19 11:37:16.506438 containerd[1780]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"} Mar 19 11:37:16.506438 containerd[1780]: delegateAdd: netconf sent to delegate plugin: Mar 19 11:37:16.529130 containerd[1780]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-19T11:37:16.529036180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:37:16.529648 containerd[1780]: time="2025-03-19T11:37:16.529605820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:37:16.529840 containerd[1780]: time="2025-03-19T11:37:16.529809460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:37:16.529976 containerd[1780]: time="2025-03-19T11:37:16.529940980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:37:16.552911 systemd[1]: run-containerd-runc-k8s.io-62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959-runc.bgjbPc.mount: Deactivated successfully. Mar 19 11:37:16.562410 systemd[1]: Started cri-containerd-62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959.scope - libcontainer container 62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959. Mar 19 11:37:16.593985 containerd[1780]: time="2025-03-19T11:37:16.593925856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c2q8b,Uid:45e4d4f8-7f5f-44c5-a3c8-8b4dccac7e09,Namespace:kube-system,Attempt:0,} returns sandbox id \"62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959\"" Mar 19 11:37:16.599160 containerd[1780]: time="2025-03-19T11:37:16.599109099Z" level=info msg="CreateContainer within sandbox \"62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:37:16.830610 containerd[1780]: time="2025-03-19T11:37:16.830328827Z" level=info msg="CreateContainer within sandbox \"62059e93761a00fbf76d1e103304224c4e98d3c7d60530fbbe965892bd875959\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0dc8369eb5dd3085fb7f8a3d9f257d25fd478321be7a1bf7377423cd5497ceb2\"" Mar 19 11:37:16.832760 containerd[1780]: time="2025-03-19T11:37:16.831043267Z" level=info msg="StartContainer for \"0dc8369eb5dd3085fb7f8a3d9f257d25fd478321be7a1bf7377423cd5497ceb2\"" Mar 19 11:37:16.854420 systemd[1]: Started cri-containerd-0dc8369eb5dd3085fb7f8a3d9f257d25fd478321be7a1bf7377423cd5497ceb2.scope - libcontainer container 0dc8369eb5dd3085fb7f8a3d9f257d25fd478321be7a1bf7377423cd5497ceb2. 
Mar 19 11:37:16.883558 containerd[1780]: time="2025-03-19T11:37:16.883498456Z" level=info msg="StartContainer for \"0dc8369eb5dd3085fb7f8a3d9f257d25fd478321be7a1bf7377423cd5497ceb2\" returns successfully" Mar 19 11:37:17.121132 containerd[1780]: time="2025-03-19T11:37:17.120620587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sdbtz,Uid:a4450b4e-0fd8-439b-9bd9-842f406c7bd3,Namespace:kube-system,Attempt:0,}" Mar 19 11:37:17.247206 kubelet[3287]: I0319 11:37:17.245654 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-c2q8b" podStartSLOduration=20.245639056 podStartE2EDuration="20.245639056s" podCreationTimestamp="2025-03-19 11:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:37:17.245470376 +0000 UTC m=+26.222585657" watchObservedRunningTime="2025-03-19 11:37:17.245639056 +0000 UTC m=+26.222754337" Mar 19 11:37:17.330812 systemd-networkd[1492]: vetha5e1bf95: Link UP Mar 19 11:37:17.341265 kernel: cni0: port 2(vetha5e1bf95) entered blocking state Mar 19 11:37:17.341355 kernel: cni0: port 2(vetha5e1bf95) entered disabled state Mar 19 11:37:17.341386 kernel: vetha5e1bf95: entered allmulticast mode Mar 19 11:37:17.348404 kernel: vetha5e1bf95: entered promiscuous mode Mar 19 11:37:17.354185 kernel: cni0: port 2(vetha5e1bf95) entered blocking state Mar 19 11:37:17.354260 kernel: cni0: port 2(vetha5e1bf95) entered forwarding state Mar 19 11:37:17.361942 systemd-networkd[1492]: vetha5e1bf95: Gained carrier Mar 19 11:37:17.364011 containerd[1780]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Mar 19 11:37:17.364011 containerd[1780]: delegateAdd: netconf sent to delegate plugin: Mar 19 11:37:17.390726 containerd[1780]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-19T11:37:17.390456976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:37:17.390726 containerd[1780]: time="2025-03-19T11:37:17.390525896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:37:17.390726 containerd[1780]: time="2025-03-19T11:37:17.390542856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:37:17.390726 containerd[1780]: time="2025-03-19T11:37:17.390618417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:37:17.413554 systemd[1]: Started cri-containerd-183971ce5f1481bcc317e6a80c59a86970a1d70d72d85366adc8a7489aa4954c.scope - libcontainer container 183971ce5f1481bcc317e6a80c59a86970a1d70d72d85366adc8a7489aa4954c. 
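The two kubelet pod_startup_latency_tracker entries above (kube-flannel-ds-6zd7z and coredns-6f6b679f8f-c2q8b) are plain arithmetic over the timestamps they print: the E2E duration is observedRunningTime minus podCreationTimestamp, and on the flannel pod's numbers the SLO duration is that E2E time minus the image-pull window (the coredns pod pulled nothing, so its two durations coincide). The stand-alone sketch below re-checks those figures using the values copied from the log; it does not query the cluster.

# Re-derive the pod startup durations from the timestamps logged above.
# datetime carries microseconds only, so the nanosecond digits are truncated;
# the results agree with the logged values to the microsecond.
from datetime import datetime, timezone

def ts(h, m, s, us=0):
    return datetime(2025, 3, 19, h, m, s, us, tzinfo=timezone.utc)

# kube-flannel/kube-flannel-ds-6zd7z
created       = ts(11, 36, 57)
first_pulling = ts(11, 36, 58, 116513)
last_pulling  = ts(11, 37, 4, 367490)
observed      = ts(11, 37, 6, 223436)

e2e = (observed - created).total_seconds()
slo = e2e - (last_pulling - first_pulling).total_seconds()
print(f"flannel podStartE2EDuration ~ {e2e:.6f}s")  # ~ 9.223436s
print(f"flannel podStartSLOduration ~ {slo:.6f}s")  # ~ 2.972459s (E2E minus pull time)

# kube-system/coredns-6f6b679f8f-c2q8b: zero pull timestamps in the log,
# so the SLO duration equals the end-to-end duration.
print((ts(11, 37, 17, 245639) - ts(11, 36, 57)).total_seconds())  # 20.245639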
Mar 19 11:37:17.454586 containerd[1780]: time="2025-03-19T11:37:17.454496332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sdbtz,Uid:a4450b4e-0fd8-439b-9bd9-842f406c7bd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"183971ce5f1481bcc317e6a80c59a86970a1d70d72d85366adc8a7489aa4954c\"" Mar 19 11:37:17.460252 containerd[1780]: time="2025-03-19T11:37:17.460104655Z" level=info msg="CreateContainer within sandbox \"183971ce5f1481bcc317e6a80c59a86970a1d70d72d85366adc8a7489aa4954c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:37:17.628414 containerd[1780]: time="2025-03-19T11:37:17.628363028Z" level=info msg="CreateContainer within sandbox \"183971ce5f1481bcc317e6a80c59a86970a1d70d72d85366adc8a7489aa4954c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"771f88c6e5c30545fd3c116d3b23c669e34a7924437f9345135fd30661830ced\"" Mar 19 11:37:17.629147 containerd[1780]: time="2025-03-19T11:37:17.629102709Z" level=info msg="StartContainer for \"771f88c6e5c30545fd3c116d3b23c669e34a7924437f9345135fd30661830ced\"" Mar 19 11:37:17.667438 systemd[1]: Started cri-containerd-771f88c6e5c30545fd3c116d3b23c669e34a7924437f9345135fd30661830ced.scope - libcontainer container 771f88c6e5c30545fd3c116d3b23c669e34a7924437f9345135fd30661830ced. Mar 19 11:37:17.703944 containerd[1780]: time="2025-03-19T11:37:17.703883510Z" level=info msg="StartContainer for \"771f88c6e5c30545fd3c116d3b23c669e34a7924437f9345135fd30661830ced\" returns successfully" Mar 19 11:37:17.825373 systemd-networkd[1492]: vethbb496037: Gained IPv6LL Mar 19 11:37:17.952376 systemd-networkd[1492]: cni0: Gained IPv6LL Mar 19 11:37:18.253616 kubelet[3287]: I0319 11:37:18.253487 3287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sdbtz" podStartSLOduration=21.253468294 podStartE2EDuration="21.253468294s" podCreationTimestamp="2025-03-19 11:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:37:18.251680853 +0000 UTC m=+27.228796134" watchObservedRunningTime="2025-03-19 11:37:18.253468294 +0000 UTC m=+27.230583575" Mar 19 11:37:18.592370 systemd-networkd[1492]: vetha5e1bf95: Gained IPv6LL Mar 19 11:37:28.960269 waagent[1983]: 2025-03-19T11:37:28.959359Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 19 11:37:28.967899 waagent[1983]: 2025-03-19T11:37:28.967823Z INFO ExtHandler Mar 19 11:37:28.967985 waagent[1983]: 2025-03-19T11:37:28.967953Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 19 11:37:29.033694 waagent[1983]: 2025-03-19T11:37:29.033633Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 19 11:37:29.127455 waagent[1983]: 2025-03-19T11:37:29.127321Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DF140C0F4B50F8696882898DD7705A0A4C72C180', 'hasPrivateKey': True} Mar 19 11:37:29.127978 waagent[1983]: 2025-03-19T11:37:29.127925Z INFO ExtHandler Downloaded certificate {'thumbprint': '0C616404AF2C2D376D573474A235391556947967', 'hasPrivateKey': False} Mar 19 11:37:29.128510 waagent[1983]: 2025-03-19T11:37:29.128457Z INFO ExtHandler Fetch goal state completed Mar 19 11:37:29.129025 waagent[1983]: 2025-03-19T11:37:29.128971Z INFO ExtHandler ExtHandler Mar 19 11:37:29.129136 waagent[1983]: 2025-03-19T11:37:29.129098Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer 
source: Fabric activity: 624cee75-3620-4837-abec-6cc805db8891 correlation e98f973a-1cf1-4b86-b802-a8dc0767c4bd created: 2025-03-19T11:37:19.347855Z] Mar 19 11:37:29.129507 waagent[1983]: 2025-03-19T11:37:29.129458Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 19 11:37:29.130322 waagent[1983]: 2025-03-19T11:37:29.130271Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Mar 19 11:37:35.163326 waagent[1983]: 2025-03-19T11:37:35.163216Z INFO ExtHandler Mar 19 11:37:35.163698 waagent[1983]: 2025-03-19T11:37:35.163429Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a7d67281-c9e2-4098-9699-bb7bd8c8bbf3 eTag: 13268719374091013792 source: Fabric] Mar 19 11:37:35.163914 waagent[1983]: 2025-03-19T11:37:35.163858Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 19 11:38:33.497470 systemd[1]: Started sshd@5-10.200.20.43:22-10.200.16.10:40032.service - OpenSSH per-connection server daemon (10.200.16.10:40032). Mar 19 11:38:33.943955 sshd[4511]: Accepted publickey for core from 10.200.16.10 port 40032 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:33.945928 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:33.953783 systemd-logind[1744]: New session 8 of user core. Mar 19 11:38:33.959527 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:38:34.383863 sshd[4513]: Connection closed by 10.200.16.10 port 40032 Mar 19 11:38:34.384476 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:34.388022 systemd[1]: sshd@5-10.200.20.43:22-10.200.16.10:40032.service: Deactivated successfully. Mar 19 11:38:34.390176 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:38:34.392863 systemd-logind[1744]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:38:34.394166 systemd-logind[1744]: Removed session 8. Mar 19 11:38:39.474190 systemd[1]: Started sshd@6-10.200.20.43:22-10.200.16.10:59082.service - OpenSSH per-connection server daemon (10.200.16.10:59082). Mar 19 11:38:39.918850 sshd[4547]: Accepted publickey for core from 10.200.16.10 port 59082 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:39.920170 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:39.924612 systemd-logind[1744]: New session 9 of user core. Mar 19 11:38:39.930412 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:38:40.316537 sshd[4549]: Connection closed by 10.200.16.10 port 59082 Mar 19 11:38:40.315830 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:40.319574 systemd[1]: sshd@6-10.200.20.43:22-10.200.16.10:59082.service: Deactivated successfully. Mar 19 11:38:40.321718 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:38:40.322620 systemd-logind[1744]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:38:40.323701 systemd-logind[1744]: Removed session 9. Mar 19 11:38:45.414557 systemd[1]: Started sshd@7-10.200.20.43:22-10.200.16.10:59094.service - OpenSSH per-connection server daemon (10.200.16.10:59094). 
Mar 19 11:38:45.902345 sshd[4583]: Accepted publickey for core from 10.200.16.10 port 59094 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:45.903879 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:45.908363 systemd-logind[1744]: New session 10 of user core. Mar 19 11:38:45.914383 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:38:46.329343 sshd[4585]: Connection closed by 10.200.16.10 port 59094 Mar 19 11:38:46.330125 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:46.332952 systemd[1]: sshd@7-10.200.20.43:22-10.200.16.10:59094.service: Deactivated successfully. Mar 19 11:38:46.335948 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:38:46.337908 systemd-logind[1744]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:38:46.339083 systemd-logind[1744]: Removed session 10. Mar 19 11:38:46.427576 systemd[1]: Started sshd@8-10.200.20.43:22-10.200.16.10:59106.service - OpenSSH per-connection server daemon (10.200.16.10:59106). Mar 19 11:38:46.910038 sshd[4598]: Accepted publickey for core from 10.200.16.10 port 59106 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:46.911379 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:46.915503 systemd-logind[1744]: New session 11 of user core. Mar 19 11:38:46.923387 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:38:47.353890 sshd[4621]: Connection closed by 10.200.16.10 port 59106 Mar 19 11:38:47.354658 sshd-session[4598]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:47.358868 systemd-logind[1744]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:38:47.359692 systemd[1]: sshd@8-10.200.20.43:22-10.200.16.10:59106.service: Deactivated successfully. Mar 19 11:38:47.362838 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:38:47.364892 systemd-logind[1744]: Removed session 11. Mar 19 11:38:47.455041 systemd[1]: Started sshd@9-10.200.20.43:22-10.200.16.10:59120.service - OpenSSH per-connection server daemon (10.200.16.10:59120). Mar 19 11:38:47.938078 sshd[4631]: Accepted publickey for core from 10.200.16.10 port 59120 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:47.939395 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:47.944465 systemd-logind[1744]: New session 12 of user core. Mar 19 11:38:47.952402 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:38:48.344568 sshd[4633]: Connection closed by 10.200.16.10 port 59120 Mar 19 11:38:48.345110 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:48.349472 systemd[1]: sshd@9-10.200.20.43:22-10.200.16.10:59120.service: Deactivated successfully. Mar 19 11:38:48.351418 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:38:48.352189 systemd-logind[1744]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:38:48.353455 systemd-logind[1744]: Removed session 12. Mar 19 11:38:53.435505 systemd[1]: Started sshd@10-10.200.20.43:22-10.200.16.10:48846.service - OpenSSH per-connection server daemon (10.200.16.10:48846). 
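The login/logout cycles in this stretch of the journal all follow the same shape: a per-connection sshd@….service starts, the public key is accepted, pam_unix and systemd-logind open session N as a session-N.scope, and on disconnect the scope, the unit and the session are torn down. For anyone post-processing such journals, the small helper below groups those messages into per-session records; the regexes are keyed to the exact message formats visible here, and the function is an illustrative sketch rather than part of any standard tool.

# Sketch: group the sshd/systemd-logind messages above into login records.
# Patterns follow the message formats visible in this journal:
#   "Accepted publickey for <user> from <ip> port <port> ..."
#   "New session <n> of user <user>."
#   "Removed session <n>."
import re

ACCEPT  = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
OPENED  = re.compile(r"New session (\d+) of user (\S+)\.")
REMOVED = re.compile(r"Removed session (\d+)\.")

def sessions(journal_lines):
    """Yield (session_id, user, client) for each session that opened and closed."""
    last_accept = None      # most recently accepted connection
    open_sessions = {}      # session id -> (user, "ip:port")
    for line in journal_lines:
        if m := ACCEPT.search(line):
            last_accept = (m.group(1), f"{m.group(2)}:{m.group(3)}")
        elif (m := OPENED.search(line)) and last_accept:
            open_sessions[m.group(1)] = last_accept
            last_accept = None
        elif (m := REMOVED.search(line)) and m.group(1) in open_sessions:
            user, client = open_sessions.pop(m.group(1))
            yield m.group(1), user, client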
Mar 19 11:38:53.881605 sshd[4668]: Accepted publickey for core from 10.200.16.10 port 48846 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:53.885106 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:53.889472 systemd-logind[1744]: New session 13 of user core. Mar 19 11:38:53.900450 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:38:54.278964 sshd[4670]: Connection closed by 10.200.16.10 port 48846 Mar 19 11:38:54.280792 sshd-session[4668]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:54.284117 systemd[1]: sshd@10-10.200.20.43:22-10.200.16.10:48846.service: Deactivated successfully. Mar 19 11:38:54.286061 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:38:54.287825 systemd-logind[1744]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:38:54.288746 systemd-logind[1744]: Removed session 13. Mar 19 11:38:54.369827 systemd[1]: Started sshd@11-10.200.20.43:22-10.200.16.10:48862.service - OpenSSH per-connection server daemon (10.200.16.10:48862). Mar 19 11:38:54.815790 sshd[4682]: Accepted publickey for core from 10.200.16.10 port 48862 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:54.817666 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:54.822147 systemd-logind[1744]: New session 14 of user core. Mar 19 11:38:54.827415 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:38:55.332996 systemd[1]: Started sshd@12-10.200.20.43:22-10.200.16.10:48876.service - OpenSSH per-connection server daemon (10.200.16.10:48876). Mar 19 11:38:55.521618 sshd[4684]: Connection closed by 10.200.16.10 port 48862 Mar 19 11:38:55.522210 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:55.526184 systemd[1]: sshd@11-10.200.20.43:22-10.200.16.10:48862.service: Deactivated successfully. Mar 19 11:38:55.528031 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:38:55.528834 systemd-logind[1744]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:38:55.530093 systemd-logind[1744]: Removed session 14. Mar 19 11:38:55.825622 sshd[4690]: Accepted publickey for core from 10.200.16.10 port 48876 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:55.826926 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:55.832430 systemd-logind[1744]: New session 15 of user core. Mar 19 11:38:55.839415 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:38:57.481622 systemd[1]: Started sshd@13-10.200.20.43:22-10.200.16.10:48884.service - OpenSSH per-connection server daemon (10.200.16.10:48884). Mar 19 11:38:57.673394 sshd[4695]: Connection closed by 10.200.16.10 port 48876 Mar 19 11:38:57.673968 sshd-session[4690]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:57.676981 systemd-logind[1744]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:38:57.677930 systemd[1]: sshd@12-10.200.20.43:22-10.200.16.10:48876.service: Deactivated successfully. Mar 19 11:38:57.681075 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:38:57.683387 systemd-logind[1744]: Removed session 15. 
Mar 19 11:38:57.927362 sshd[4730]: Accepted publickey for core from 10.200.16.10 port 48884 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:57.928683 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:57.933411 systemd-logind[1744]: New session 16 of user core. Mar 19 11:38:57.939440 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:38:58.432614 sshd[4737]: Connection closed by 10.200.16.10 port 48884 Mar 19 11:38:58.433006 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:58.436698 systemd[1]: sshd@13-10.200.20.43:22-10.200.16.10:48884.service: Deactivated successfully. Mar 19 11:38:58.439989 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:38:58.440825 systemd-logind[1744]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:38:58.441780 systemd-logind[1744]: Removed session 16. Mar 19 11:38:58.518087 systemd[1]: Started sshd@14-10.200.20.43:22-10.200.16.10:38660.service - OpenSSH per-connection server daemon (10.200.16.10:38660). Mar 19 11:38:58.977624 sshd[4747]: Accepted publickey for core from 10.200.16.10 port 38660 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:38:58.978954 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:38:58.983891 systemd-logind[1744]: New session 17 of user core. Mar 19 11:38:58.993423 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:38:59.369365 sshd[4749]: Connection closed by 10.200.16.10 port 38660 Mar 19 11:38:59.370094 sshd-session[4747]: pam_unix(sshd:session): session closed for user core Mar 19 11:38:59.374187 systemd[1]: sshd@14-10.200.20.43:22-10.200.16.10:38660.service: Deactivated successfully. Mar 19 11:38:59.376572 systemd[1]: session-17.scope: Deactivated successfully. Mar 19 11:38:59.378098 systemd-logind[1744]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:38:59.379073 systemd-logind[1744]: Removed session 17. Mar 19 11:39:04.460541 systemd[1]: Started sshd@15-10.200.20.43:22-10.200.16.10:38666.service - OpenSSH per-connection server daemon (10.200.16.10:38666). Mar 19 11:39:04.908106 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 38666 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:39:04.909433 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:39:04.913946 systemd-logind[1744]: New session 18 of user core. Mar 19 11:39:04.921427 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 19 11:39:05.300576 sshd[4786]: Connection closed by 10.200.16.10 port 38666 Mar 19 11:39:05.301305 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Mar 19 11:39:05.304950 systemd[1]: sshd@15-10.200.20.43:22-10.200.16.10:38666.service: Deactivated successfully. Mar 19 11:39:05.308076 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:39:05.308929 systemd-logind[1744]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:39:05.309949 systemd-logind[1744]: Removed session 18. Mar 19 11:39:10.390477 systemd[1]: Started sshd@16-10.200.20.43:22-10.200.16.10:52030.service - OpenSSH per-connection server daemon (10.200.16.10:52030). 
Mar 19 11:39:10.833495 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 52030 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:39:10.834706 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:39:10.839584 systemd-logind[1744]: New session 19 of user core. Mar 19 11:39:10.846579 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 19 11:39:11.227989 sshd[4820]: Connection closed by 10.200.16.10 port 52030 Mar 19 11:39:11.228564 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Mar 19 11:39:11.232553 systemd[1]: sshd@16-10.200.20.43:22-10.200.16.10:52030.service: Deactivated successfully. Mar 19 11:39:11.234680 systemd[1]: session-19.scope: Deactivated successfully. Mar 19 11:39:11.236760 systemd-logind[1744]: Session 19 logged out. Waiting for processes to exit. Mar 19 11:39:11.238159 systemd-logind[1744]: Removed session 19. Mar 19 11:39:16.319297 systemd[1]: Started sshd@17-10.200.20.43:22-10.200.16.10:52044.service - OpenSSH per-connection server daemon (10.200.16.10:52044). Mar 19 11:39:16.803031 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 52044 ssh2: RSA SHA256:ScVfKVQZSDUHhq4rPeJDw3DfMan/+rU4LYzEFDsfkGk Mar 19 11:39:16.804460 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:39:16.808965 systemd-logind[1744]: New session 20 of user core. Mar 19 11:39:16.814436 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 19 11:39:17.212993 sshd[4861]: Connection closed by 10.200.16.10 port 52044 Mar 19 11:39:17.213666 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Mar 19 11:39:17.217149 systemd[1]: sshd@17-10.200.20.43:22-10.200.16.10:52044.service: Deactivated successfully. Mar 19 11:39:17.220241 systemd[1]: session-20.scope: Deactivated successfully. Mar 19 11:39:17.223420 systemd-logind[1744]: Session 20 logged out. Waiting for processes to exit. Mar 19 11:39:17.224808 systemd-logind[1744]: Removed session 20.