Jan 30 13:24:46.374417 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:24:46.374440 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:24:46.374448 kernel: KASLR enabled
Jan 30 13:24:46.374454 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 13:24:46.374461 kernel: printk: bootconsole [pl11] enabled
Jan 30 13:24:46.374467 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:24:46.374474 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eac7018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 30 13:24:46.374480 kernel: random: crng init done
Jan 30 13:24:46.374486 kernel: secureboot: Secure boot disabled
Jan 30 13:24:46.374492 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:24:46.374498 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 13:24:46.374503 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374509 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374517 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 13:24:46.374524 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374531 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374537 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374545 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374551 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374557 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374563 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 13:24:46.374569 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374575 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 13:24:46.374582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 13:24:46.374588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 13:24:46.374594 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 13:24:46.374600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 13:24:46.374606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 13:24:46.374614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 13:24:46.374620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 13:24:46.374626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 13:24:46.374632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 13:24:46.374638 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 13:24:46.374644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 13:24:46.374650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 13:24:46.374656 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 30 13:24:46.374662 kernel: Zone ranges:
Jan 30 13:24:46.374668 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 13:24:46.374674 kernel: DMA32 empty
Jan 30 13:24:46.374681 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:24:46.374697 kernel: Movable zone start for each node
Jan 30 13:24:46.374704 kernel: Early memory node ranges
Jan 30 13:24:46.374711 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 13:24:46.374717 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 30 13:24:46.374724 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 30 13:24:46.374732 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 30 13:24:46.374739 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 13:24:46.374745 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 13:24:46.374751 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 13:24:46.374758 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 13:24:46.374765 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:24:46.374771 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 13:24:46.374778 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 13:24:46.374784 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:24:46.374791 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:24:46.374797 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:24:46.374804 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 13:24:46.374812 kernel: psci: SMC Calling Convention v1.4
Jan 30 13:24:46.374818 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 13:24:46.374825 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 13:24:46.374831 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:24:46.374838 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:24:46.374844 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:24:46.374851 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:24:46.374857 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:24:46.374864 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:24:46.374870 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:24:46.374877 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:24:46.374885 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:24:46.374891 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:24:46.374898 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 13:24:46.374904 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:24:46.374911 kernel: alternatives: applying boot alternatives
Jan 30 13:24:46.374918 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:24:46.374925 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:24:46.374932 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:24:46.374939 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:24:46.374945 kernel: Fallback order for Node 0: 0
Jan 30 13:24:46.374952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 13:24:46.374960 kernel: Policy zone: Normal
Jan 30 13:24:46.374966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:24:46.374972 kernel: software IO TLB: area num 2.
Jan 30 13:24:46.374979 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 30 13:24:46.374986 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved)
Jan 30 13:24:46.374992 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:24:46.374999 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:24:46.375006 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:24:46.375012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:24:46.375019 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:24:46.375025 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:24:46.375033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:24:46.375040 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:24:46.375047 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:24:46.375053 kernel: GICv3: 960 SPIs implemented
Jan 30 13:24:46.375060 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:24:46.375066 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:24:46.375073 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:24:46.375079 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 13:24:46.375086 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 13:24:46.375092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:24:46.375099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:24:46.375105 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:24:46.375113 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:24:46.375120 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:24:46.375127 kernel: Console: colour dummy device 80x25
Jan 30 13:24:46.375134 kernel: printk: console [tty1] enabled
Jan 30 13:24:46.375140 kernel: ACPI: Core revision 20230628
Jan 30 13:24:46.375147 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:24:46.375154 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:24:46.375161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:24:46.375167 kernel: landlock: Up and running.
Jan 30 13:24:46.375175 kernel: SELinux: Initializing.
Jan 30 13:24:46.375182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375189 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375196 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:24:46.375203 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:24:46.375209 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 13:24:46.375216 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 13:24:46.375229 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:24:46.375236 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:24:46.375243 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:24:46.375250 kernel: Remapping and enabling EFI services.
Jan 30 13:24:46.375257 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:24:46.375265 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:24:46.375273 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 13:24:46.375280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:24:46.375287 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:24:46.375294 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:24:46.375302 kernel: SMP: Total of 2 processors activated.
Jan 30 13:24:46.375310 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:24:46.375317 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 13:24:46.375324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:24:46.375331 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:24:46.375338 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:24:46.375345 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:24:46.375352 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:24:46.375359 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:24:46.375368 kernel: alternatives: applying system-wide alternatives
Jan 30 13:24:46.375375 kernel: devtmpfs: initialized
Jan 30 13:24:46.375392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:24:46.375399 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:24:46.375406 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:24:46.375413 kernel: SMBIOS 3.1.0 present.
Jan 30 13:24:46.375420 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 13:24:46.375427 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:24:46.375434 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:24:46.375443 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:24:46.375451 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:24:46.375458 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:24:46.375465 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 13:24:46.375472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:24:46.375479 kernel: cpuidle: using governor menu
Jan 30 13:24:46.375486 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:24:46.375493 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:24:46.375500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:24:46.375508 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:24:46.375515 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:24:46.375522 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:24:46.375529 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:24:46.375536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:24:46.375550 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:24:46.375564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:24:46.375581 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:24:46.375595 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:24:46.375602 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:24:46.375609 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:24:46.375616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:24:46.375623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:24:46.375630 kernel: ACPI: Interpreter enabled
Jan 30 13:24:46.375638 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:24:46.375645 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:24:46.375652 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:24:46.375659 kernel: printk: bootconsole [pl11] disabled
Jan 30 13:24:46.375666 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 13:24:46.375673 kernel: iommu: Default domain type: Translated
Jan 30 13:24:46.375681 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:24:46.375688 kernel: efivars: Registered efivars operations
Jan 30 13:24:46.375694 kernel: vgaarb: loaded
Jan 30 13:24:46.375703 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:24:46.375710 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:24:46.375718 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:24:46.375725 kernel: pnp: PnP ACPI init
Jan 30 13:24:46.375732 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 13:24:46.375738 kernel: NET: Registered PF_INET protocol family
Jan 30 13:24:46.375745 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:24:46.375752 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:24:46.375760 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:24:46.375768 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:24:46.375776 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:24:46.375783 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:24:46.375790 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375797 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375804 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:24:46.375811 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:24:46.375818 kernel: kvm [1]: HYP mode not available
Jan 30 13:24:46.375825 kernel: Initialise system trusted keyrings
Jan 30 13:24:46.375833 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:24:46.375840 kernel: Key type asymmetric registered
Jan 30 13:24:46.375847 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:24:46.375854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:24:46.375861 kernel: io scheduler mq-deadline registered
Jan 30 13:24:46.375868 kernel: io scheduler kyber registered
Jan 30 13:24:46.375875 kernel: io scheduler bfq registered
Jan 30 13:24:46.375882 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:24:46.375889 kernel: thunder_xcv, ver 1.0
Jan 30 13:24:46.375897 kernel: thunder_bgx, ver 1.0
Jan 30 13:24:46.375904 kernel: nicpf, ver 1.0
Jan 30 13:24:46.375911 kernel: nicvf, ver 1.0
Jan 30 13:24:46.376041 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:24:46.376113 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:24:45 UTC (1738243485)
Jan 30 13:24:46.376123 kernel: efifb: probing for efifb
Jan 30 13:24:46.376131 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:24:46.376138 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:24:46.376147 kernel: efifb: scrolling: redraw
Jan 30 13:24:46.376154 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:24:46.376161 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:24:46.376168 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:24:46.376175 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 13:24:46.376182 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:24:46.376190 kernel: No ACPI PMU IRQ for CPU0
Jan 30 13:24:46.376197 kernel: No ACPI PMU IRQ for CPU1
Jan 30 13:24:46.376204 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 13:24:46.376212 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:24:46.376220 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:24:46.376227 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:24:46.376234 kernel: Segment Routing with IPv6
Jan 30 13:24:46.376241 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:24:46.376248 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:24:46.376255 kernel: Key type dns_resolver registered
Jan 30 13:24:46.376262 kernel: registered taskstats version 1
Jan 30 13:24:46.376269 kernel: Loading compiled-in X.509 certificates
Jan 30 13:24:46.376278 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:24:46.376285 kernel: Key type .fscrypt registered
Jan 30 13:24:46.376292 kernel: Key type fscrypt-provisioning registered
Jan 30 13:24:46.376299 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:24:46.376306 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:24:46.376313 kernel: ima: No architecture policies found
Jan 30 13:24:46.376320 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:24:46.376327 kernel: clk: Disabling unused clocks
Jan 30 13:24:46.376334 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:24:46.376343 kernel: Run /init as init process
Jan 30 13:24:46.376350 kernel: with arguments:
Jan 30 13:24:46.376356 kernel: /init
Jan 30 13:24:46.376364 kernel: with environment:
Jan 30 13:24:46.376370 kernel: HOME=/
Jan 30 13:24:46.376389 kernel: TERM=linux
Jan 30 13:24:46.376397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:24:46.376407 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:24:46.376418 systemd[1]: Detected virtualization microsoft.
Jan 30 13:24:46.376425 systemd[1]: Detected architecture arm64.
Jan 30 13:24:46.376433 systemd[1]: Running in initrd.
Jan 30 13:24:46.376440 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:24:46.376447 systemd[1]: Hostname set to .
Jan 30 13:24:46.376455 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:24:46.376462 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:24:46.376470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:24:46.376479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:24:46.376488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:24:46.376495 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:24:46.376503 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:24:46.376511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:24:46.376520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:24:46.376529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:24:46.376537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:24:46.376545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:24:46.376553 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:24:46.376560 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:24:46.376568 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:24:46.376575 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:24:46.376583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:24:46.376590 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:24:46.376599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:24:46.376607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:24:46.376615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:24:46.376623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:24:46.376631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:24:46.376639 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:24:46.376646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:24:46.376654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:24:46.376663 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:24:46.376671 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:24:46.376679 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:24:46.376686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:24:46.376709 systemd-journald[218]: Collecting audit messages is disabled.
Jan 30 13:24:46.376729 systemd-journald[218]: Journal started
Jan 30 13:24:46.376753 systemd-journald[218]: Runtime Journal (/run/log/journal/2468697717f64c30af0ddc1761fd502a) is 8.0M, max 78.5M, 70.5M free.
Jan 30 13:24:46.382072 systemd-modules-load[219]: Inserted module 'overlay'
Jan 30 13:24:46.419343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:24:46.419369 kernel: Bridge firewalling registered
Jan 30 13:24:46.419403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:46.406369 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 30 13:24:46.443970 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:24:46.445713 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:24:46.453564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:24:46.470425 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:24:46.484641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:24:46.492936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:46.519747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:46.537591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:24:46.552210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:24:46.578622 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:24:46.594660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:46.603915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:24:46.618511 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:24:46.632543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:24:46.659625 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:24:46.677946 dracut-cmdline[250]: dracut-dracut-053
Jan 30 13:24:46.678062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:24:46.698588 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:24:46.736530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:24:46.765459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:24:46.775002 systemd-resolved[260]: Positive Trust Anchors:
Jan 30 13:24:46.775013 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:24:46.775044 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:24:46.777247 systemd-resolved[260]: Defaulting to hostname 'linux'.
Jan 30 13:24:46.778651 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:24:46.793686 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:24:46.883398 kernel: SCSI subsystem initialized
Jan 30 13:24:46.891399 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:24:46.902412 kernel: iscsi: registered transport (tcp)
Jan 30 13:24:46.921775 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:24:46.921820 kernel: QLogic iSCSI HBA Driver
Jan 30 13:24:46.960648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:24:46.980647 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:24:47.016041 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:24:47.016086 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:24:47.023811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:24:47.073418 kernel: raid6: neonx8 gen() 15769 MB/s
Jan 30 13:24:47.094396 kernel: raid6: neonx4 gen() 15817 MB/s
Jan 30 13:24:47.115388 kernel: raid6: neonx2 gen() 13209 MB/s
Jan 30 13:24:47.137391 kernel: raid6: neonx1 gen() 10426 MB/s
Jan 30 13:24:47.157388 kernel: raid6: int64x8 gen() 6791 MB/s
Jan 30 13:24:47.178399 kernel: raid6: int64x4 gen() 7357 MB/s
Jan 30 13:24:47.200391 kernel: raid6: int64x2 gen() 6109 MB/s
Jan 30 13:24:47.225256 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 30 13:24:47.225267 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Jan 30 13:24:47.252990 kernel: raid6: .... xor() 12355 MB/s, rmw enabled
Jan 30 13:24:47.253002 kernel: raid6: using neon recovery algorithm
Jan 30 13:24:47.265260 kernel: xor: measuring software checksum speed
Jan 30 13:24:47.265275 kernel: 8regs : 21584 MB/sec
Jan 30 13:24:47.269650 kernel: 32regs : 21641 MB/sec
Jan 30 13:24:47.273845 kernel: arm64_neon : 27860 MB/sec
Jan 30 13:24:47.278843 kernel: xor: using function: arm64_neon (27860 MB/sec)
Jan 30 13:24:47.329401 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:24:47.338697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:24:47.355507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:24:47.380053 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 30 13:24:47.386304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:24:47.409669 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:24:47.424989 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Jan 30 13:24:47.451283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:24:47.467627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:24:47.507853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:24:47.532588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:24:47.552672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:24:47.573049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:24:47.595607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:24:47.612400 kernel: hv_vmbus: Vmbus version:5.3 Jan 30 13:24:47.613391 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:24:47.633729 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:24:47.652395 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 30 13:24:47.652447 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:24:47.658652 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:24:47.658670 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:24:47.658412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:24:47.682757 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:24:47.682780 kernel: PTP clock support registered Jan 30 13:24:47.698405 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:24:47.692582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:24:47.743477 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 13:24:47.743506 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:24:47.743517 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:24:47.743526 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:24:47.743544 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:24:47.743553 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:24:47.692749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:47.730121 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:47.962289 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:24:47.962320 kernel: scsi host0: storvsc_host_t
Jan 30 13:24:47.962549 kernel: scsi host1: storvsc_host_t
Jan 30 13:24:47.918956 systemd-resolved[260]: Clock change detected. Flushing caches.
Jan 30 13:24:47.999015 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:24:47.999194 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:24:47.923280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:47.923509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:47.940361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.057167 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: VF slot 1 added
Jan 30 13:24:47.963271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.088318 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:24:48.109583 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:24:48.109599 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:24:48.109609 kernel: hv_pci 8138cb8e-bc32-4a28-b6dd-bdc72ff90e7b: PCI VMBus probing: Using version 0x10004
Jan 30 13:24:48.291306 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:24:48.291443 kernel: hv_pci 8138cb8e-bc32-4a28-b6dd-bdc72ff90e7b: PCI host bridge to bus bc32:00
Jan 30 13:24:48.291526 kernel: pci_bus bc32:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 30 13:24:48.291662 kernel: pci_bus bc32:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:24:48.291752 kernel: pci bc32:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 30 13:24:48.291847 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:24:48.291989 kernel: pci bc32:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:24:48.292096 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:24:48.292181 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:24:48.292262 kernel: pci bc32:00:02.0: enabling Extended Tags
Jan 30 13:24:48.292368 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:24:48.292475 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:24:48.292566 kernel: pci bc32:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bc32:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 30 13:24:48.292664 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:48.292676 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:24:48.292765 kernel: pci_bus bc32:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:24:48.292843 kernel: pci bc32:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:24:47.992825 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:24:48.000538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.009786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:48.010076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.021773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.056319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.090275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.132371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:48.217567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:48.380870 kernel: mlx5_core bc32:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:24:48.676014 kernel: mlx5_core bc32:00:02.0: firmware version: 16.30.1284
Jan 30 13:24:48.676182 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (500)
Jan 30 13:24:48.676195 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (504)
Jan 30 13:24:48.676205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:48.676221 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: VF registering: eth1
Jan 30 13:24:48.676315 kernel: mlx5_core bc32:00:02.0 eth1: joined to eth0
Jan 30 13:24:48.676411 kernel: mlx5_core bc32:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 30 13:24:48.469580 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:24:48.528834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:24:48.542389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:24:48.559576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:24:48.566944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:24:48.736486 kernel: mlx5_core bc32:00:02.0 enP48178s1: renamed from eth1
Jan 30 13:24:48.588094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:24:49.632645 disk-uuid[602]: The operation has completed successfully.
Jan 30 13:24:49.638026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:49.707826 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:24:49.709947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:24:49.731112 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:24:49.745689 sh[690]: Success
Jan 30 13:24:49.766962 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:24:49.843329 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:24:49.853045 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:24:49.862864 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:24:49.905096 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:24:49.905150 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:49.912491 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:24:49.919170 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:24:49.924663 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:24:49.999677 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:24:50.005594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:24:50.027179 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:24:50.062254 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:50.062317 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:50.068836 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:50.063080 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:24:50.100570 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:50.108606 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:24:50.124998 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:50.134956 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:24:50.153154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:24:50.161657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:24:50.183172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:24:50.216136 systemd-networkd[874]: lo: Link UP
Jan 30 13:24:50.216147 systemd-networkd[874]: lo: Gained carrier
Jan 30 13:24:50.218263 systemd-networkd[874]: Enumeration completed
Jan 30 13:24:50.218371 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:24:50.223974 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:50.223978 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:24:50.233152 systemd[1]: Reached target network.target - Network.
Jan 30 13:24:50.297938 kernel: mlx5_core bc32:00:02.0 enP48178s1: Link up
Jan 30 13:24:50.340024 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: Data path switched to VF: enP48178s1
Jan 30 13:24:50.340502 systemd-networkd[874]: enP48178s1: Link UP
Jan 30 13:24:50.340776 systemd-networkd[874]: eth0: Link UP
Jan 30 13:24:50.341188 systemd-networkd[874]: eth0: Gained carrier
Jan 30 13:24:50.341198 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:50.352170 systemd-networkd[874]: enP48178s1: Gained carrier
Jan 30 13:24:50.380008 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 13:24:50.417808 ignition[872]: Ignition 2.20.0
Jan 30 13:24:50.417820 ignition[872]: Stage: fetch-offline
Jan 30 13:24:50.417857 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.425774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:24:50.417866 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.417989 ignition[872]: parsed url from cmdline: ""
Jan 30 13:24:50.417992 ignition[872]: no config URL provided
Jan 30 13:24:50.417997 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.455096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:24:50.418005 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.418010 ignition[872]: failed to fetch config: resource requires networking
Jan 30 13:24:50.418187 ignition[872]: Ignition finished successfully
Jan 30 13:24:50.473523 ignition[883]: Ignition 2.20.0
Jan 30 13:24:50.473531 ignition[883]: Stage: fetch
Jan 30 13:24:50.473734 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.473744 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.473852 ignition[883]: parsed url from cmdline: ""
Jan 30 13:24:50.473856 ignition[883]: no config URL provided
Jan 30 13:24:50.473861 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.473870 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.473899 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:24:50.567946 ignition[883]: GET result: OK
Jan 30 13:24:50.568052 ignition[883]: config has been read from IMDS userdata
Jan 30 13:24:50.568104 ignition[883]: parsing config with SHA512: 15563e9d77acfff87bb9e33092d78b3e3666b8f75b736000875c676ed152ec4d02b35f75e89cf5e0f180f9f8d7400a0d476dedb62c37c8e98b99d47da9da3d8d
Jan 30 13:24:50.572807 unknown[883]: fetched base config from "system"
Jan 30 13:24:50.573254 ignition[883]: fetch: fetch complete
Jan 30 13:24:50.572814 unknown[883]: fetched base config from "system"
Jan 30 13:24:50.573259 ignition[883]: fetch: fetch passed
Jan 30 13:24:50.572819 unknown[883]: fetched user config from "azure"
Jan 30 13:24:50.573301 ignition[883]: Ignition finished successfully
Jan 30 13:24:50.578504 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:24:50.600579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:24:50.635249 ignition[889]: Ignition 2.20.0
Jan 30 13:24:50.635258 ignition[889]: Stage: kargs
Jan 30 13:24:50.640998 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:24:50.635421 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.635430 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.636316 ignition[889]: kargs: kargs passed
Jan 30 13:24:50.667175 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:24:50.636357 ignition[889]: Ignition finished successfully
Jan 30 13:24:50.691230 ignition[896]: Ignition 2.20.0
Jan 30 13:24:50.691242 ignition[896]: Stage: disks
Jan 30 13:24:50.695717 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:24:50.691407 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.705235 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:24:50.691416 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.716637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:24:50.692363 ignition[896]: disks: disks passed
Jan 30 13:24:50.728597 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:24:50.692408 ignition[896]: Ignition finished successfully
Jan 30 13:24:50.740858 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:24:50.753700 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:24:50.781172 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:24:50.813538 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:24:50.822411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:24:50.842118 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:24:50.901940 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:24:50.903252 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:24:50.908327 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:24:50.939006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:24:50.950038 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:24:50.965436 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:24:50.979990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:24:50.985004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:24:51.008596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:24:51.030433 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915)
Jan 30 13:24:51.030467 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.042561 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:51.047256 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:51.065590 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:51.059882 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:24:51.073619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:24:51.164998 coreos-metadata[917]: Jan 30 13:24:51.164 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:24:51.178262 coreos-metadata[917]: Jan 30 13:24:51.178 INFO Fetch successful
Jan 30 13:24:51.184522 coreos-metadata[917]: Jan 30 13:24:51.178 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:24:51.196553 coreos-metadata[917]: Jan 30 13:24:51.192 INFO Fetch successful
Jan 30 13:24:51.196553 coreos-metadata[917]: Jan 30 13:24:51.196 INFO wrote hostname ci-4186.1.0-a-a27a4db638 to /sysroot/etc/hostname
Jan 30 13:24:51.202962 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:24:51.280308 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:24:51.300505 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:24:51.309978 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:24:51.318990 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:24:51.597856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:24:51.616077 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:24:51.623634 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:24:51.644483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:24:51.657474 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.658255 systemd-networkd[874]: enP48178s1: Gained IPv6LL
Jan 30 13:24:51.682544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:24:51.695862 ignition[1034]: INFO : Ignition 2.20.0
Jan 30 13:24:51.695862 ignition[1034]: INFO : Stage: mount
Jan 30 13:24:51.695862 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:51.695862 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:51.695862 ignition[1034]: INFO : mount: mount passed
Jan 30 13:24:51.695862 ignition[1034]: INFO : Ignition finished successfully
Jan 30 13:24:51.696411 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:24:51.716087 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:24:51.913077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:24:51.935112 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Jan 30 13:24:51.948212 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.948270 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:51.952602 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:51.961007 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:51.962347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:24:51.992470 ignition[1062]: INFO : Ignition 2.20.0
Jan 30 13:24:51.992470 ignition[1062]: INFO : Stage: files
Jan 30 13:24:51.992470 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:51.992470 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:51.992470 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:24:52.020446 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:24:52.028204 unknown[1062]: wrote ssh authorized keys file for user: core
Jan 30 13:24:52.101684 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:24:52.197230 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:24:52.197230 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:24:52.218605 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:24:52.362035 systemd-networkd[874]: eth0: Gained IPv6LL
Jan 30 13:24:52.682219 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:24:53.166641 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:24:53.350119 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:53.350119 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: files passed
Jan 30 13:24:53.371182 ignition[1062]: INFO : Ignition finished successfully
Jan 30 13:24:53.371017 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:24:53.401577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:24:53.425087 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:24:53.464766 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:24:53.500680 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.500680 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.464872 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:24:53.537882 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.499070 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:24:53.507887 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:24:53.547193 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:24:53.597457 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:24:53.597581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:24:53.611448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:24:53.624204 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:24:53.635760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:24:53.651173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:24:53.674625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:24:53.694168 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:24:53.713745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:24:53.713873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:24:53.728368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:24:53.741663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:24:53.754779 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:24:53.766012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:24:53.766084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:24:53.782857 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:24:53.789165 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:24:53.801001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:24:53.813691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:24:53.828917 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:24:53.841471 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:24:53.854442 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:24:53.867979 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:24:53.879228 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:24:53.892119 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:24:53.904971 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:24:53.905055 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:24:53.920956 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:24:53.933006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:24:53.945709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:24:53.945764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:24:53.958218 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:24:53.958291 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:24:53.976592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:24:53.976649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:24:53.983972 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:24:53.984018 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:24:53.997986 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:24:54.082014 ignition[1116]: INFO : Ignition 2.20.0
Jan 30 13:24:54.082014 ignition[1116]: INFO : Stage: umount
Jan 30 13:24:54.082014 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:54.082014 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:54.082014 ignition[1116]: INFO : umount: umount passed
Jan 30 13:24:54.082014 ignition[1116]: INFO : Ignition finished successfully
Jan 30 13:24:53.998042 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:24:54.032154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:24:54.044670 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:24:54.057982 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:24:54.058063 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:24:54.074710 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:24:54.074774 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:24:54.087759 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:24:54.092337 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:24:54.104990 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:24:54.105309 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:24:54.105348 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:24:54.118662 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:24:54.118728 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:24:54.129071 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:24:54.129121 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:24:54.140009 systemd[1]: Stopped target network.target - Network.
Jan 30 13:24:54.154963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:24:54.155031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:24:54.167322 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:24:54.180053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:24:54.191956 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:24:54.199850 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:24:54.211096 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:24:54.228199 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:24:54.228252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:24:54.238854 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:24:54.238899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:24:54.250899 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:24:54.250975 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:24:54.265148 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:24:54.265200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:24:54.277179 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:24:54.290994 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:24:54.319094 systemd-networkd[874]: eth0: DHCPv6 lease lost
Jan 30 13:24:54.532327 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: Data path switched from VF: enP48178s1
Jan 30 13:24:54.320260 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:24:54.320417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:24:54.339175 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:24:54.339314 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:24:54.350675 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:24:54.350737 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:24:54.382034 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:24:54.392700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:24:54.392772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:24:54.407629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:24:54.407687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:24:54.418988 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:24:54.419040 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:24:54.430462 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:24:54.430508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:24:54.443856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:24:54.469608 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:24:54.469773 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:24:54.477494 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:24:54.477548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:24:54.490526 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:24:54.490573 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:24:54.504616 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:24:54.504666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:24:54.526558 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:24:54.526629 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:24:54.543630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:24:54.543699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:54.823494 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:24:54.582159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:24:54.599067 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:24:54.599144 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:24:54.612262 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:24:54.612321 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:24:54.628572 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:24:54.628623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:24:54.644614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:54.644662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:54.659502 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:24:54.659601 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:24:54.671579 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:24:54.671675 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:24:54.685185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:24:54.687350 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:24:54.699623 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:24:54.713244 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:24:54.713341 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:24:54.743151 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:24:54.764040 systemd[1]: Switching root.
Jan 30 13:24:54.965630 systemd-journald[218]: Journal stopped
Jan 30 13:24:46.374417 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:24:46.374440 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:24:46.374448 kernel: KASLR enabled
Jan 30 13:24:46.374454 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 13:24:46.374461 kernel: printk: bootconsole [pl11] enabled
Jan 30 13:24:46.374467 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:24:46.374474 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eac7018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 30 13:24:46.374480 kernel: random: crng init done
Jan 30 13:24:46.374486 kernel: secureboot: Secure boot disabled
Jan 30 13:24:46.374492 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:24:46.374498 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 13:24:46.374503 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374509 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374517 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 13:24:46.374524 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374531 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374537 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374545 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374551 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374557 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374563 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 13:24:46.374569 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:24:46.374575 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 13:24:46.374582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 13:24:46.374588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 13:24:46.374594 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 13:24:46.374600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 13:24:46.374606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 13:24:46.374614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 13:24:46.374620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 13:24:46.374626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 13:24:46.374632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 13:24:46.374638 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 13:24:46.374644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 13:24:46.374650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 13:24:46.374656 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 30 13:24:46.374662 kernel: Zone ranges:
Jan 30 13:24:46.374668 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 13:24:46.374674 kernel: DMA32 empty
Jan 30 13:24:46.374681 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:24:46.374697 kernel: Movable zone start for each node
Jan 30 13:24:46.374704 kernel: Early memory node ranges
Jan 30 13:24:46.374711 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 13:24:46.374717 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 30 13:24:46.374724 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 30 13:24:46.374732 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 30 13:24:46.374739 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 13:24:46.374745 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 13:24:46.374751 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 13:24:46.374758 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 13:24:46.374765 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 13:24:46.374771 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 13:24:46.374778 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 13:24:46.374784 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:24:46.374791 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:24:46.374797 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:24:46.374804 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 13:24:46.374812 kernel: psci: SMC Calling Convention v1.4
Jan 30 13:24:46.374818 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 13:24:46.374825 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 13:24:46.374831 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:24:46.374838 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:24:46.374844 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:24:46.374851 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:24:46.374857 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:24:46.374864 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:24:46.374870 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:24:46.374877 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:24:46.374885 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:24:46.374891 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:24:46.374898 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 13:24:46.374904 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:24:46.374911 kernel: alternatives: applying boot alternatives
Jan 30 13:24:46.374918 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:24:46.374925 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:24:46.374932 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:24:46.374939 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:24:46.374945 kernel: Fallback order for Node 0: 0
Jan 30 13:24:46.374952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 13:24:46.374960 kernel: Policy zone: Normal
Jan 30 13:24:46.374966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:24:46.374972 kernel: software IO TLB: area num 2.
Jan 30 13:24:46.374979 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 30 13:24:46.374986 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved)
Jan 30 13:24:46.374992 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:24:46.374999 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:24:46.375006 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:24:46.375012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:24:46.375019 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:24:46.375025 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:24:46.375033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:24:46.375040 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:24:46.375047 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:24:46.375053 kernel: GICv3: 960 SPIs implemented
Jan 30 13:24:46.375060 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:24:46.375066 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:24:46.375073 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:24:46.375079 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 13:24:46.375086 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 13:24:46.375092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:24:46.375099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:24:46.375105 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:24:46.375113 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:24:46.375120 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:24:46.375127 kernel: Console: colour dummy device 80x25
Jan 30 13:24:46.375134 kernel: printk: console [tty1] enabled
Jan 30 13:24:46.375140 kernel: ACPI: Core revision 20230628
Jan 30 13:24:46.375147 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:24:46.375154 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:24:46.375161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:24:46.375167 kernel: landlock: Up and running.
Jan 30 13:24:46.375175 kernel: SELinux: Initializing.
Jan 30 13:24:46.375182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375189 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375196 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:24:46.375203 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:24:46.375209 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 13:24:46.375216 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 13:24:46.375229 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:24:46.375236 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:24:46.375243 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:24:46.375250 kernel: Remapping and enabling EFI services.
Jan 30 13:24:46.375257 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:24:46.375265 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:24:46.375273 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 13:24:46.375280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:24:46.375287 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:24:46.375294 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:24:46.375302 kernel: SMP: Total of 2 processors activated.
Jan 30 13:24:46.375310 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:24:46.375317 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 13:24:46.375324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:24:46.375331 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:24:46.375338 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:24:46.375345 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:24:46.375352 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:24:46.375359 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:24:46.375368 kernel: alternatives: applying system-wide alternatives
Jan 30 13:24:46.375375 kernel: devtmpfs: initialized
Jan 30 13:24:46.375392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:24:46.375399 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:24:46.375406 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:24:46.375413 kernel: SMBIOS 3.1.0 present.
Jan 30 13:24:46.375420 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 13:24:46.375427 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:24:46.375434 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:24:46.375443 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:24:46.375451 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:24:46.375458 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:24:46.375465 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 13:24:46.375472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:24:46.375479 kernel: cpuidle: using governor menu
Jan 30 13:24:46.375486 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:24:46.375493 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:24:46.375500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:24:46.375508 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:24:46.375515 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:24:46.375522 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:24:46.375529 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:24:46.375536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:24:46.375550 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:24:46.375564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:24:46.375581 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:24:46.375588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:24:46.375595 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:24:46.375602 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:24:46.375609 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:24:46.375616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:24:46.375623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:24:46.375630 kernel: ACPI: Interpreter enabled
Jan 30 13:24:46.375638 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:24:46.375645 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:24:46.375652 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:24:46.375659 kernel: printk: bootconsole [pl11] disabled
Jan 30 13:24:46.375666 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 13:24:46.375673 kernel: iommu: Default domain type: Translated
Jan 30 13:24:46.375681 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:24:46.375688 kernel: efivars: Registered efivars operations
Jan 30 13:24:46.375694 kernel: vgaarb: loaded
Jan 30 13:24:46.375703 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:24:46.375710 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:24:46.375718 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:24:46.375725 kernel: pnp: PnP ACPI init
Jan 30 13:24:46.375732 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 13:24:46.375738 kernel: NET: Registered PF_INET protocol family
Jan 30 13:24:46.375745 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:24:46.375752 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:24:46.375760 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:24:46.375768 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:24:46.375776 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:24:46.375783 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:24:46.375790 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375797 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:24:46.375804 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:24:46.375811 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:24:46.375818 kernel: kvm [1]: HYP mode not available
Jan 30 13:24:46.375825 kernel: Initialise system trusted keyrings
Jan 30 13:24:46.375833 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:24:46.375840 kernel: Key type asymmetric registered
Jan 30 13:24:46.375847 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:24:46.375854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:24:46.375861 kernel: io scheduler mq-deadline registered
Jan 30 13:24:46.375868 kernel: io scheduler kyber registered
Jan 30 13:24:46.375875 kernel: io scheduler bfq registered
Jan 30 13:24:46.375882 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:24:46.375889 kernel: thunder_xcv, ver 1.0
Jan 30 13:24:46.375897 kernel: thunder_bgx, ver 1.0
Jan 30 13:24:46.375904 kernel: nicpf, ver 1.0
Jan 30 13:24:46.375911 kernel: nicvf, ver 1.0
Jan 30 13:24:46.376041 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:24:46.376113 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:24:45 UTC (1738243485)
Jan 30 13:24:46.376123 kernel: efifb: probing for efifb
Jan 30 13:24:46.376131 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:24:46.376138 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:24:46.376147 kernel: efifb: scrolling: redraw
Jan 30 13:24:46.376154 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:24:46.376161 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:24:46.376168 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:24:46.376175 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 13:24:46.376182 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:24:46.376190 kernel: No ACPI PMU IRQ for CPU0
Jan 30 13:24:46.376197 kernel: No ACPI PMU IRQ for CPU1
Jan 30 13:24:46.376204 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 13:24:46.376212 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:24:46.376220 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:24:46.376227 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:24:46.376234 kernel: Segment Routing with IPv6
Jan 30 13:24:46.376241 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:24:46.376248 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:24:46.376255 kernel: Key type dns_resolver registered
Jan 30 13:24:46.376262 kernel: registered taskstats version 1
Jan 30 13:24:46.376269 kernel: Loading compiled-in X.509 certificates
Jan 30 13:24:46.376278 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:24:46.376285 kernel: Key type .fscrypt registered
Jan 30 13:24:46.376292 kernel: Key type fscrypt-provisioning registered
Jan 30 13:24:46.376299 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:24:46.376306 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:24:46.376313 kernel: ima: No architecture policies found
Jan 30 13:24:46.376320 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:24:46.376327 kernel: clk: Disabling unused clocks
Jan 30 13:24:46.376334 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:24:46.376343 kernel: Run /init as init process
Jan 30 13:24:46.376350 kernel: with arguments:
Jan 30 13:24:46.376356 kernel: /init
Jan 30 13:24:46.376364 kernel: with environment:
Jan 30 13:24:46.376370 kernel: HOME=/
Jan 30 13:24:46.376389 kernel: TERM=linux
Jan 30 13:24:46.376397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:24:46.376407 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:24:46.376418 systemd[1]: Detected virtualization microsoft.
Jan 30 13:24:46.376425 systemd[1]: Detected architecture arm64.
Jan 30 13:24:46.376433 systemd[1]: Running in initrd.
Jan 30 13:24:46.376440 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:24:46.376447 systemd[1]: Hostname set to .
Jan 30 13:24:46.376455 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:24:46.376462 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:24:46.376470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:24:46.376479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:24:46.376488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:24:46.376495 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:24:46.376503 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:24:46.376511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:24:46.376520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:24:46.376529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:24:46.376537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:24:46.376545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:24:46.376553 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:24:46.376560 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:24:46.376568 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:24:46.376575 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:24:46.376583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:24:46.376590 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:24:46.376599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:24:46.376607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:24:46.376615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:24:46.376623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:24:46.376631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:24:46.376639 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:24:46.376646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:24:46.376654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:24:46.376663 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:24:46.376671 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:24:46.376679 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:24:46.376686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:24:46.376709 systemd-journald[218]: Collecting audit messages is disabled.
Jan 30 13:24:46.376729 systemd-journald[218]: Journal started
Jan 30 13:24:46.376753 systemd-journald[218]: Runtime Journal (/run/log/journal/2468697717f64c30af0ddc1761fd502a) is 8.0M, max 78.5M, 70.5M free.
Jan 30 13:24:46.382072 systemd-modules-load[219]: Inserted module 'overlay'
Jan 30 13:24:46.419343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:24:46.419369 kernel: Bridge firewalling registered
Jan 30 13:24:46.419403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:46.406369 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 30 13:24:46.443970 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:24:46.445713 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:24:46.453564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:24:46.470425 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:24:46.484641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:24:46.492936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:46.519747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:46.537591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:24:46.552210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:24:46.578622 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:24:46.594660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:46.603915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:24:46.618511 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:24:46.632543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:24:46.659625 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:24:46.677946 dracut-cmdline[250]: dracut-dracut-053
Jan 30 13:24:46.678062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:24:46.698588 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:24:46.736530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:24:46.765459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:24:46.775002 systemd-resolved[260]: Positive Trust Anchors:
Jan 30 13:24:46.775013 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:24:46.775044 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:24:46.777247 systemd-resolved[260]: Defaulting to hostname 'linux'.
Jan 30 13:24:46.778651 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:24:46.793686 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:24:46.883398 kernel: SCSI subsystem initialized
Jan 30 13:24:46.891399 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:24:46.902412 kernel: iscsi: registered transport (tcp)
Jan 30 13:24:46.921775 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:24:46.921820 kernel: QLogic iSCSI HBA Driver
Jan 30 13:24:46.960648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:24:46.980647 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:24:47.016041 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:24:47.016086 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:24:47.023811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:24:47.073418 kernel: raid6: neonx8 gen() 15769 MB/s
Jan 30 13:24:47.094396 kernel: raid6: neonx4 gen() 15817 MB/s
Jan 30 13:24:47.115388 kernel: raid6: neonx2 gen() 13209 MB/s
Jan 30 13:24:47.137391 kernel: raid6: neonx1 gen() 10426 MB/s
Jan 30 13:24:47.157388 kernel: raid6: int64x8 gen() 6791 MB/s
Jan 30 13:24:47.178399 kernel: raid6: int64x4 gen() 7357 MB/s
Jan 30 13:24:47.200391 kernel: raid6: int64x2 gen() 6109 MB/s
Jan 30 13:24:47.225256 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 30 13:24:47.225267 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Jan 30 13:24:47.252990 kernel: raid6: .... xor() 12355 MB/s, rmw enabled
Jan 30 13:24:47.253002 kernel: raid6: using neon recovery algorithm
Jan 30 13:24:47.265260 kernel: xor: measuring software checksum speed
Jan 30 13:24:47.265275 kernel: 8regs : 21584 MB/sec
Jan 30 13:24:47.269650 kernel: 32regs : 21641 MB/sec
Jan 30 13:24:47.273845 kernel: arm64_neon : 27860 MB/sec
Jan 30 13:24:47.278843 kernel: xor: using function: arm64_neon (27860 MB/sec)
Jan 30 13:24:47.329401 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:24:47.338697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:24:47.355507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:24:47.380053 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 30 13:24:47.386304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:24:47.409669 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:24:47.424989 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Jan 30 13:24:47.451283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:24:47.467627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:24:47.507853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:24:47.532588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:24:47.552672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:24:47.573049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:24:47.595607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:24:47.612400 kernel: hv_vmbus: Vmbus version:5.3
Jan 30 13:24:47.613391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:24:47.633729 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:24:47.652395 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 30 13:24:47.652447 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:24:47.658652 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:24:47.658670 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:24:47.658412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:24:47.682757 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:24:47.682780 kernel: PTP clock support registered
Jan 30 13:24:47.698405 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:24:47.692582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:24:47.743477 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 13:24:47.743506 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:24:47.743517 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:24:47.743526 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:24:47.743544 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:24:47.743553 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:24:47.692749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:47.730121 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:47.962289 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:24:47.962320 kernel: scsi host0: storvsc_host_t
Jan 30 13:24:47.962549 kernel: scsi host1: storvsc_host_t
Jan 30 13:24:47.918956 systemd-resolved[260]: Clock change detected. Flushing caches.
Jan 30 13:24:47.999015 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:24:47.999194 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:24:47.923280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:47.923509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:47.940361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.057167 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: VF slot 1 added
Jan 30 13:24:47.963271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.088318 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:24:48.109583 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:24:48.109599 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:24:48.109609 kernel: hv_pci 8138cb8e-bc32-4a28-b6dd-bdc72ff90e7b: PCI VMBus probing: Using version 0x10004
Jan 30 13:24:48.291306 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:24:48.291443 kernel: hv_pci 8138cb8e-bc32-4a28-b6dd-bdc72ff90e7b: PCI host bridge to bus bc32:00
Jan 30 13:24:48.291526 kernel: pci_bus bc32:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 30 13:24:48.291662 kernel: pci_bus bc32:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:24:48.291752 kernel: pci bc32:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 30 13:24:48.291847 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:24:48.291989 kernel: pci bc32:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:24:48.292096 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:24:48.292181 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:24:48.292262 kernel: pci bc32:00:02.0: enabling Extended Tags
Jan 30 13:24:48.292368 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:24:48.292475 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:24:48.292566 kernel: pci bc32:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bc32:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 30 13:24:48.292664 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:48.292676 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:24:48.292765 kernel: pci_bus bc32:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:24:48.292843 kernel: pci bc32:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 13:24:47.992825 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:24:48.000538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.009786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:48.010076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.021773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.056319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:48.090275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:48.132371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:24:48.217567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:48.380870 kernel: mlx5_core bc32:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:24:48.676014 kernel: mlx5_core bc32:00:02.0: firmware version: 16.30.1284
Jan 30 13:24:48.676182 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (500)
Jan 30 13:24:48.676195 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (504)
Jan 30 13:24:48.676205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:48.676221 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: VF registering: eth1
Jan 30 13:24:48.676315 kernel: mlx5_core bc32:00:02.0 eth1: joined to eth0
Jan 30 13:24:48.676411 kernel: mlx5_core bc32:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 30 13:24:48.469580 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:24:48.528834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:24:48.542389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:24:48.559576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:24:48.566944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:24:48.736486 kernel: mlx5_core bc32:00:02.0 enP48178s1: renamed from eth1
Jan 30 13:24:48.588094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:24:49.632645 disk-uuid[602]: The operation has completed successfully.
Jan 30 13:24:49.638026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:24:49.707826 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:24:49.709947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:24:49.731112 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:24:49.745689 sh[690]: Success
Jan 30 13:24:49.766962 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:24:49.843329 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:24:49.853045 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:24:49.862864 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:24:49.905096 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:24:49.905150 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:49.912491 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:24:49.919170 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:24:49.924663 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:24:49.999677 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:24:50.005594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:24:50.027179 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:24:50.062254 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:50.062317 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:50.068836 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:50.063080 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:24:50.100570 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:50.108606 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:24:50.124998 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:50.134956 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:24:50.153154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:24:50.161657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:24:50.183172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:24:50.216136 systemd-networkd[874]: lo: Link UP
Jan 30 13:24:50.216147 systemd-networkd[874]: lo: Gained carrier
Jan 30 13:24:50.218263 systemd-networkd[874]: Enumeration completed
Jan 30 13:24:50.218371 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:24:50.223974 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:50.223978 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:24:50.233152 systemd[1]: Reached target network.target - Network.
Jan 30 13:24:50.297938 kernel: mlx5_core bc32:00:02.0 enP48178s1: Link up
Jan 30 13:24:50.340024 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: Data path switched to VF: enP48178s1
Jan 30 13:24:50.340502 systemd-networkd[874]: enP48178s1: Link UP
Jan 30 13:24:50.340776 systemd-networkd[874]: eth0: Link UP
Jan 30 13:24:50.341188 systemd-networkd[874]: eth0: Gained carrier
Jan 30 13:24:50.341198 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:50.352170 systemd-networkd[874]: enP48178s1: Gained carrier
Jan 30 13:24:50.380008 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 13:24:50.417808 ignition[872]: Ignition 2.20.0
Jan 30 13:24:50.417820 ignition[872]: Stage: fetch-offline
Jan 30 13:24:50.417857 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.425774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:24:50.417866 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.417989 ignition[872]: parsed url from cmdline: ""
Jan 30 13:24:50.417992 ignition[872]: no config URL provided
Jan 30 13:24:50.417997 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.455096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:24:50.418005 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.418010 ignition[872]: failed to fetch config: resource requires networking
Jan 30 13:24:50.418187 ignition[872]: Ignition finished successfully
Jan 30 13:24:50.473523 ignition[883]: Ignition 2.20.0
Jan 30 13:24:50.473531 ignition[883]: Stage: fetch
Jan 30 13:24:50.473734 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.473744 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.473852 ignition[883]: parsed url from cmdline: ""
Jan 30 13:24:50.473856 ignition[883]: no config URL provided
Jan 30 13:24:50.473861 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.473870 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:24:50.473899 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:24:50.567946 ignition[883]: GET result: OK
Jan 30 13:24:50.568052 ignition[883]: config has been read from IMDS userdata
Jan 30 13:24:50.568104 ignition[883]: parsing config with SHA512: 15563e9d77acfff87bb9e33092d78b3e3666b8f75b736000875c676ed152ec4d02b35f75e89cf5e0f180f9f8d7400a0d476dedb62c37c8e98b99d47da9da3d8d
Jan 30 13:24:50.572807 unknown[883]: fetched base config from "system"
Jan 30 13:24:50.573254 ignition[883]: fetch: fetch complete
Jan 30 13:24:50.572814 unknown[883]: fetched base config from "system"
Jan 30 13:24:50.573259 ignition[883]: fetch: fetch passed
Jan 30 13:24:50.572819 unknown[883]: fetched user config from "azure"
Jan 30 13:24:50.573301 ignition[883]: Ignition finished successfully
Jan 30 13:24:50.578504 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:24:50.600579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:24:50.635249 ignition[889]: Ignition 2.20.0
Jan 30 13:24:50.635258 ignition[889]: Stage: kargs
Jan 30 13:24:50.640998 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:24:50.635421 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.635430 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.636316 ignition[889]: kargs: kargs passed
Jan 30 13:24:50.667175 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:24:50.636357 ignition[889]: Ignition finished successfully
Jan 30 13:24:50.691230 ignition[896]: Ignition 2.20.0
Jan 30 13:24:50.691242 ignition[896]: Stage: disks
Jan 30 13:24:50.695717 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:24:50.691407 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:50.705235 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:24:50.691416 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:50.716637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:24:50.692363 ignition[896]: disks: disks passed
Jan 30 13:24:50.728597 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:24:50.692408 ignition[896]: Ignition finished successfully
Jan 30 13:24:50.740858 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:24:50.753700 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:24:50.781172 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:24:50.813538 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:24:50.822411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:24:50.842118 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:24:50.901940 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:24:50.903252 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:24:50.908327 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:24:50.939006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:24:50.950038 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:24:50.965436 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:24:50.979990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:24:50.985004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:24:51.008596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:24:51.030433 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915)
Jan 30 13:24:51.030467 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.042561 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:51.047256 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:51.065590 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:51.059882 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:24:51.073619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:24:51.164998 coreos-metadata[917]: Jan 30 13:24:51.164 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:24:51.178262 coreos-metadata[917]: Jan 30 13:24:51.178 INFO Fetch successful
Jan 30 13:24:51.184522 coreos-metadata[917]: Jan 30 13:24:51.178 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:24:51.196553 coreos-metadata[917]: Jan 30 13:24:51.192 INFO Fetch successful
Jan 30 13:24:51.196553 coreos-metadata[917]: Jan 30 13:24:51.196 INFO wrote hostname ci-4186.1.0-a-a27a4db638 to /sysroot/etc/hostname
Jan 30 13:24:51.202962 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:24:51.280308 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:24:51.300505 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:24:51.309978 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:24:51.318990 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:24:51.597856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:24:51.616077 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:24:51.623634 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:24:51.644483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:24:51.657474 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.658255 systemd-networkd[874]: enP48178s1: Gained IPv6LL
Jan 30 13:24:51.682544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:24:51.695862 ignition[1034]: INFO : Ignition 2.20.0
Jan 30 13:24:51.695862 ignition[1034]: INFO : Stage: mount
Jan 30 13:24:51.695862 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:51.695862 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:51.695862 ignition[1034]: INFO : mount: mount passed
Jan 30 13:24:51.695862 ignition[1034]: INFO : Ignition finished successfully
Jan 30 13:24:51.696411 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:24:51.716087 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:24:51.913077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:24:51.935112 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Jan 30 13:24:51.948212 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:24:51.948270 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:24:51.952602 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:24:51.961007 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:24:51.962347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:24:51.992470 ignition[1062]: INFO : Ignition 2.20.0
Jan 30 13:24:51.992470 ignition[1062]: INFO : Stage: files
Jan 30 13:24:51.992470 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:51.992470 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:51.992470 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:24:52.020446 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:24:52.020446 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:24:52.053369 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:24:52.028204 unknown[1062]: wrote ssh authorized keys file for user: core
Jan 30 13:24:52.101684 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:24:52.197230 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:24:52.197230 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:24:52.218605 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:24:52.362035 systemd-networkd[874]: eth0: Gained IPv6LL
Jan 30 13:24:52.682219 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:24:52.750753 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:52.879900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:24:53.166641 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:24:53.350119 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:24:53.350119 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:24:53.371182 ignition[1062]: INFO : files: files passed
Jan 30 13:24:53.371182 ignition[1062]: INFO : Ignition finished successfully
Jan 30 13:24:53.371017 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:24:53.401577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:24:53.425087 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:24:53.464766 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:24:53.500680 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.500680 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.464872 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:24:53.537882 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:24:53.499070 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:24:53.507887 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:24:53.547193 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:24:53.597457 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:24:53.597581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:24:53.611448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:24:53.624204 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:24:53.635760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:24:53.651173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:24:53.674625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:24:53.694168 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:24:53.713745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:24:53.713873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:24:53.728368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:24:53.741663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:24:53.754779 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:24:53.766012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:24:53.766084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:24:53.782857 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:24:53.789165 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:24:53.801001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:24:53.813691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:24:53.828917 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:24:53.841471 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:24:53.854442 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:24:53.867979 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:24:53.879228 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:24:53.892119 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:24:53.904971 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:24:53.905055 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:24:53.920956 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:24:53.933006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:24:53.945709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:24:53.945764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:24:53.958218 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:24:53.958291 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:24:53.976592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:24:53.976649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:24:53.983972 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:24:53.984018 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:24:53.997986 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:24:54.082014 ignition[1116]: INFO : Ignition 2.20.0
Jan 30 13:24:54.082014 ignition[1116]: INFO : Stage: umount
Jan 30 13:24:54.082014 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:24:54.082014 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:24:54.082014 ignition[1116]: INFO : umount: umount passed
Jan 30 13:24:54.082014 ignition[1116]: INFO : Ignition finished successfully
Jan 30 13:24:53.998042 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:24:54.032154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:24:54.044670 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:24:54.057982 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:24:54.058063 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:24:54.074710 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:24:54.074774 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:24:54.087759 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:24:54.092337 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:24:54.104990 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:24:54.105309 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:24:54.105348 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:24:54.118662 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:24:54.118728 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:24:54.129071 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:24:54.129121 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:24:54.140009 systemd[1]: Stopped target network.target - Network.
Jan 30 13:24:54.154963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:24:54.155031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:24:54.167322 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:24:54.180053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:24:54.191956 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:24:54.199850 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:24:54.211096 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:24:54.228199 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:24:54.228252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:24:54.238854 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:24:54.238899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:24:54.250899 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:24:54.250975 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:24:54.265148 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:24:54.265200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:24:54.277179 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:24:54.290994 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:24:54.319094 systemd-networkd[874]: eth0: DHCPv6 lease lost
Jan 30 13:24:54.532327 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: Data path switched from VF: enP48178s1
Jan 30 13:24:54.320260 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:24:54.320417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:24:54.339175 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:24:54.339314 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:24:54.350675 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:24:54.350737 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:24:54.382034 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:24:54.392700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:24:54.392772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:24:54.407629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:24:54.407687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:24:54.418988 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:24:54.419040 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:24:54.430462 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:24:54.430508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:24:54.443856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:24:54.469608 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:24:54.469773 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:24:54.477494 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:24:54.477548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:24:54.490526 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:24:54.490573 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:24:54.504616 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:24:54.504666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:24:54.526558 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:24:54.526629 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:24:54.543630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:24:54.543699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:24:54.823494 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:24:54.582159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:24:54.599067 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:24:54.599144 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:24:54.612262 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:24:54.612321 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:24:54.628572 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:24:54.628623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:24:54.644614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:24:54.644662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:54.659502 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:24:54.659601 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:24:54.671579 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:24:54.671675 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:24:54.685185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:24:54.687350 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:24:54.699623 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:24:54.713244 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:24:54.713341 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:24:54.743151 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:24:54.764040 systemd[1]: Switching root.
Jan 30 13:24:54.965630 systemd-journald[218]: Journal stopped
Jan 30 13:24:57.282780 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:24:57.282804 kernel: SELinux: policy capability open_perms=1
Jan 30 13:24:57.282817 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:24:57.282825 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:24:57.282835 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:24:57.282843 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:24:57.282852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:24:57.282860 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:24:57.282869 kernel: audit: type=1403 audit(1738243495.372:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:24:57.282879 systemd[1]: Successfully loaded SELinux policy in 89.911ms.
Jan 30 13:24:57.282890 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.156ms.
Jan 30 13:24:57.282900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:24:57.282923 systemd[1]: Detected virtualization microsoft.
Jan 30 13:24:57.282953 systemd[1]: Detected architecture arm64.
Jan 30 13:24:57.282963 systemd[1]: Detected first boot.
Jan 30 13:24:57.282975 systemd[1]: Hostname set to .
Jan 30 13:24:57.282985 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:24:57.282994 zram_generator::config[1158]: No configuration found.
Jan 30 13:24:57.283004 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:24:57.283013 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:24:57.283022 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:24:57.283031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:24:57.283042 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:24:57.283053 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:24:57.283063 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:24:57.283072 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:24:57.283082 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:24:57.283091 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:24:57.283101 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:24:57.283112 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:24:57.283121 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:24:57.283135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:24:57.283145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:24:57.283154 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:24:57.283164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:24:57.283173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:24:57.283183 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:24:57.283194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:24:57.283203 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:24:57.283212 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:24:57.283225 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:24:57.283235 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:24:57.283244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:24:57.283254 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:24:57.283264 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:24:57.283276 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:24:57.283285 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:24:57.283295 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:24:57.283304 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:24:57.283314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:24:57.283324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:24:57.283335 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:24:57.283345 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:24:57.283354 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:24:57.283364 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:24:57.283373 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:24:57.283383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:24:57.283393 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:24:57.283404 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:24:57.283414 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:24:57.283424 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:24:57.283434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:24:57.283444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:24:57.283454 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:24:57.283463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:24:57.283474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:24:57.283485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:24:57.283495 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:24:57.283505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:24:57.283515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:24:57.283524 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:24:57.283534 kernel: fuse: init (API version 7.39)
Jan 30 13:24:57.283543 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:24:57.283553 kernel: ACPI: bus type drm_connector registered
Jan 30 13:24:57.283562 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:24:57.283573 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:24:57.283583 kernel: loop: module loaded
Jan 30 13:24:57.283592 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:24:57.283602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:24:57.283628 systemd-journald[1261]: Collecting audit messages is disabled.
Jan 30 13:24:57.283651 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:24:57.283661 systemd-journald[1261]: Journal started
Jan 30 13:24:57.283685 systemd-journald[1261]: Runtime Journal (/run/log/journal/a5d44154a45143038ffdc7c6b866f9ac) is 8.0M, max 78.5M, 70.5M free.
Jan 30 13:24:56.380787 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:24:56.425125 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:24:56.425476 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:24:56.425776 systemd[1]: systemd-journald.service: Consumed 3.536s CPU time.
Jan 30 13:24:57.316608 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:24:57.333619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:24:57.344939 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:24:57.345004 systemd[1]: Stopped verity-setup.service.
Jan 30 13:24:57.362599 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:24:57.363438 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:24:57.369682 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:24:57.376323 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:24:57.383036 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:24:57.390412 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:24:57.398052 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:24:57.403764 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:24:57.411085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:24:57.419115 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:24:57.419266 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:24:57.426394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:24:57.426521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:24:57.433982 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:24:57.434113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:24:57.440961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:24:57.441086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:24:57.450190 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:24:57.450334 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:24:57.458233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:24:57.458364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:24:57.465508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:24:57.472398 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:24:57.479863 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:24:57.487721 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:24:57.506029 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:24:57.517002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:24:57.526074 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:24:57.532346 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:24:57.532388 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:24:57.539135 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:24:57.547528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:24:57.555049 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:24:57.561536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:24:57.566208 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:24:57.574881 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:24:57.582855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:24:57.586103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:24:57.593622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:24:57.596094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:24:57.604097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:24:57.611217 systemd-journald[1261]: Time spent on flushing to /var/log/journal/a5d44154a45143038ffdc7c6b866f9ac is 14.685ms for 904 entries.
Jan 30 13:24:57.611217 systemd-journald[1261]: System Journal (/var/log/journal/a5d44154a45143038ffdc7c6b866f9ac) is 8.0M, max 2.6G, 2.6G free.
Jan 30 13:24:57.654154 systemd-journald[1261]: Received client request to flush runtime journal.
Jan 30 13:24:57.654201 kernel: loop0: detected capacity change from 0 to 28752
Jan 30 13:24:57.622106 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:24:57.640102 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:24:57.660324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:24:57.669441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:24:57.682680 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:24:57.695597 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:24:57.707772 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:24:57.713507 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Jan 30 13:24:57.713523 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Jan 30 13:24:57.717581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:24:57.724843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:24:57.739789 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:24:57.752233 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:24:57.761183 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:24:57.780651 udevadm[1295]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:24:57.786316 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:24:57.812946 kernel: loop1: detected capacity change from 0 to 194096
Jan 30 13:24:57.819182 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:24:57.821179 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:24:57.869038 kernel: loop2: detected capacity change from 0 to 116784
Jan 30 13:24:57.909516 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:24:57.924163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:24:57.959724 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jan 30 13:24:57.959745 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jan 30 13:24:57.963927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:24:58.005975 kernel: loop3: detected capacity change from 0 to 113552
Jan 30 13:24:58.113952 kernel: loop4: detected capacity change from 0 to 28752
Jan 30 13:24:58.127957 kernel: loop5: detected capacity change from 0 to 194096
Jan 30 13:24:58.152953 kernel: loop6: detected capacity change from 0 to 116784
Jan 30 13:24:58.172986 kernel: loop7: detected capacity change from 0 to 113552
Jan 30 13:24:58.178903 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 30 13:24:58.179357 (sd-merge)[1320]: Merged extensions into '/usr'.
Jan 30 13:24:58.184264 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:24:58.184279 systemd[1]: Reloading...
Jan 30 13:24:58.286951 zram_generator::config[1349]: No configuration found.
Jan 30 13:24:58.415542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:24:58.472229 systemd[1]: Reloading finished in 287 ms.
Jan 30 13:24:58.501933 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:24:58.510107 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:24:58.527113 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:24:58.533168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:24:58.542121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:24:58.553127 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:24:58.553146 systemd[1]: Reloading...
Jan 30 13:24:58.571703 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:24:58.571958 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:24:58.572591 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:24:58.572812 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 30 13:24:58.572863 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 30 13:24:58.581294 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Jan 30 13:24:58.588249 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:24:58.588264 systemd-tmpfiles[1403]: Skipping /boot
Jan 30 13:24:58.601944 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:24:58.601959 systemd-tmpfiles[1403]: Skipping /boot
Jan 30 13:24:58.630527 zram_generator::config[1429]: No configuration found.
Jan 30 13:24:58.798218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:24:58.881491 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:24:58.881617 systemd[1]: Reloading finished in 327 ms.
Jan 30 13:24:58.901403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:24:58.915068 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:24:58.915154 kernel: hv_vmbus: registering driver hv_balloon
Jan 30 13:24:58.917888 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:24:58.935948 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 30 13:24:58.936046 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 30 13:24:58.946091 kernel: hv_vmbus: registering driver hyperv_fb
Jan 30 13:24:58.958451 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 30 13:24:58.958539 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 30 13:24:58.954109 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:24:58.967311 kernel: Console: switching to colour dummy device 80x25
Jan 30 13:24:58.983014 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:24:58.972989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:24:58.995870 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:24:59.007187 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:24:59.017113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:24:59.032105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:24:59.071906 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:24:59.089049 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1466)
Jan 30 13:24:59.089125 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:24:59.109527 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Jan 30 13:24:59.116424 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:24:59.143345 augenrules[1587]: No rules
Jan 30 13:24:59.146098 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:24:59.147986 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:24:59.162579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:24:59.172719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:24:59.194892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:24:59.203235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:24:59.213580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:24:59.219560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:24:59.219772 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:24:59.229231 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:24:59.247182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:24:59.255782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:24:59.257965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:24:59.265091 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:24:59.265236 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:24:59.271951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:24:59.272083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:24:59.279596 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:24:59.286595 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:24:59.286770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:24:59.326892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:24:59.336949 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:24:59.351557 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:24:59.366212 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:24:59.380274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:24:59.380365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:24:59.382062 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:24:59.394957 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:24:59.396761 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:24:59.408564 systemd-networkd[1537]: lo: Link UP
Jan 30 13:24:59.408573 systemd-networkd[1537]: lo: Gained carrier
Jan 30 13:24:59.414756 systemd-networkd[1537]: Enumeration completed
Jan 30 13:24:59.415115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:24:59.417355 systemd-networkd[1537]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:59.417360 systemd-networkd[1537]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:24:59.420346 systemd-resolved[1538]: Positive Trust Anchors:
Jan 30 13:24:59.420358 systemd-resolved[1538]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:24:59.420389 systemd-resolved[1538]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:24:59.425567 systemd-resolved[1538]: Using system hostname 'ci-4186.1.0-a-a27a4db638'.
Jan 30 13:24:59.429205 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:24:59.436963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:24:59.446112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:24:59.459090 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:24:59.470303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:24:59.475175 lvm[1647]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:24:59.489596 kernel: mlx5_core bc32:00:02.0 enP48178s1: Link up
Jan 30 13:24:59.505652 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:24:59.521951 kernel: hv_netvsc 0022487b-0ab0-0022-487b-0ab00022487b eth0: Data path switched to VF: enP48178s1
Jan 30 13:24:59.522483 systemd-networkd[1537]: enP48178s1: Link UP
Jan 30 13:24:59.522692 systemd-networkd[1537]: eth0: Link UP
Jan 30 13:24:59.522696 systemd-networkd[1537]: eth0: Gained carrier
Jan 30 13:24:59.522711 systemd-networkd[1537]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:24:59.525262 systemd-networkd[1537]: enP48178s1: Gained carrier
Jan 30 13:24:59.526762 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:24:59.534252 systemd[1]: Reached target network.target - Network.
Jan 30 13:24:59.540142 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:24:59.553039 systemd-networkd[1537]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 13:24:59.576181 ldconfig[1287]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:24:59.583669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:24:59.592552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:24:59.604094 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:24:59.617521 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:24:59.624224 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:24:59.630416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:24:59.637692 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:24:59.645264 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:24:59.651354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:24:59.658591 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:24:59.665904 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:24:59.665954 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:24:59.671230 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:24:59.677846 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:24:59.686305 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:24:59.695882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:24:59.702564 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:24:59.709038 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:24:59.715157 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:24:59.722591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:24:59.722727 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:24:59.729034 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 30 13:24:59.739079 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:24:59.751131 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:24:59.759409 (chronyd)[1659]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 30 13:24:59.760094 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:24:59.766839 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:24:59.771532 chronyd[1668]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 30 13:24:59.778170 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:24:59.784411 chronyd[1668]: Timezone right/UTC failed leap second check, ignoring
Jan 30 13:24:59.784666 chronyd[1668]: Loaded seccomp filter (level 2)
Jan 30 13:24:59.788136 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:24:59.788180 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 30 13:24:59.789690 jq[1666]: false
Jan 30 13:24:59.797193 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 30 13:24:59.800646 KVP[1671]: KVP starting; pid is:1671
Jan 30 13:24:59.805358 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 30 13:24:59.806978 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:24:59.817061 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:24:59.821166 extend-filesystems[1669]: Found loop4
Jan 30 13:24:59.821166 extend-filesystems[1669]: Found loop5
Jan 30 13:24:59.821166 extend-filesystems[1669]: Found loop6
Jan 30 13:24:59.841421 kernel: hv_utils: KVP IC version 4.0
Jan 30 13:24:59.827147 dbus-daemon[1662]: [system] SELinux support is enabled
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found loop7
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda1
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda2
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda3
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found usr
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda4
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda6
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda7
Jan 30 13:24:59.844936 extend-filesystems[1669]: Found sda9
Jan 30 13:24:59.844936 extend-filesystems[1669]: Checking size of /dev/sda9
Jan 30 13:24:59.843194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:24:59.828689 KVP[1671]: KVP LIC Version: 3.1
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.896 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.904 INFO Fetch successful
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.905 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.917 INFO Fetch successful
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.918 INFO Fetching http://168.63.129.16/machine/bfcdbf0b-e486-4506-a9a5-c59054407e57/cb4a45b1%2D756f%2D446a%2Da6ee%2Dbefe38edc93f.%5Fci%2D4186.1.0%2Da%2Da27a4db638?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.922 INFO Fetch successful
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.923 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:25:00.056144 coreos-metadata[1661]: Jan 30 13:24:59.948 INFO Fetch successful
Jan 30 13:25:00.056411 extend-filesystems[1669]: Old size kept for /dev/sda9
Jan 30 13:25:00.056411 extend-filesystems[1669]: Found sr0
Jan 30 13:25:00.087880 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1466)
Jan 30 13:24:59.868257 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:24:59.894464 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:24:59.909185 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:24:59.909693 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:24:59.917786 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:25:00.088455 update_engine[1699]: I20250130 13:24:59.966965 1699 main.cc:92] Flatcar Update Engine starting
Jan 30 13:25:00.088455 update_engine[1699]: I20250130 13:24:59.971228 1699 update_check_scheduler.cc:74] Next update check in 7m54s
Jan 30 13:24:59.932083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:25:00.094305 jq[1701]: true
Jan 30 13:24:59.948734 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:24:59.961807 systemd[1]: Started chronyd.service - NTP client/server.
Jan 30 13:24:59.984283 systemd-logind[1690]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 30 13:24:59.984490 systemd-logind[1690]: New seat seat0.
Jan 30 13:24:59.990282 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:25:00.014615 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:25:00.015568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:25:00.015851 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:25:00.017001 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:25:00.040584 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:25:00.045350 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:25:00.068297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:25:00.069532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:25:00.102691 jq[1748]: true
Jan 30 13:25:00.104228 (ntainerd)[1750]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:25:00.112270 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:25:00.131464 dbus-daemon[1662]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 13:25:00.135585 tar[1740]: linux-arm64/helm
Jan 30 13:25:00.148812 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:25:00.158095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 13:25:00.158281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:25:00.158405 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:25:00.166100 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:25:00.166211 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:25:00.182287 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:25:00.238788 locksmithd[1780]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:25:00.282492 bash[1779]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:25:00.275266 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:25:00.288550 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:25:00.442486 containerd[1750]: time="2025-01-30T13:25:00.442327380Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:25:00.517440 containerd[1750]: time="2025-01-30T13:25:00.517360580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.518759 containerd[1750]: time="2025-01-30T13:25:00.518712300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:25:00.518759 containerd[1750]: time="2025-01-30T13:25:00.518749980Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:25:00.518852 containerd[1750]: time="2025-01-30T13:25:00.518768060Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:25:00.518983 containerd[1750]: time="2025-01-30T13:25:00.518961140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:25:00.519012 containerd[1750]: time="2025-01-30T13:25:00.518986780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519076 containerd[1750]: time="2025-01-30T13:25:00.519054940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519076 containerd[1750]: time="2025-01-30T13:25:00.519073340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519262 containerd[1750]: time="2025-01-30T13:25:00.519238420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519262 containerd[1750]: time="2025-01-30T13:25:00.519258900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519313 containerd[1750]: time="2025-01-30T13:25:00.519272740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519313 containerd[1750]: time="2025-01-30T13:25:00.519282540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519378 containerd[1750]: time="2025-01-30T13:25:00.519357860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519579 containerd[1750]: time="2025-01-30T13:25:00.519555900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519682 containerd[1750]: time="2025-01-30T13:25:00.519661060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:25:00.519682 containerd[1750]: time="2025-01-30T13:25:00.519679620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:25:00.519773 containerd[1750]: time="2025-01-30T13:25:00.519754140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:25:00.519818 containerd[1750]: time="2025-01-30T13:25:00.519801700Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:25:00.540995 containerd[1750]: time="2025-01-30T13:25:00.540944660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:25:00.541070 containerd[1750]: time="2025-01-30T13:25:00.541019620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:25:00.541070 containerd[1750]: time="2025-01-30T13:25:00.541036260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:25:00.541070 containerd[1750]: time="2025-01-30T13:25:00.541051540Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:25:00.541070 containerd[1750]: time="2025-01-30T13:25:00.541066620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:25:00.541273 containerd[1750]: time="2025-01-30T13:25:00.541248460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541515820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541645540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541662300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541675700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541690340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541703660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541715940Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541728980Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541744700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541761460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541773700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541785460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541805380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.542916 containerd[1750]: time="2025-01-30T13:25:00.541818980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541831100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541844700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541860780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541873020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541883580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.541900900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.542996740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543022420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543035140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543048820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543060660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543075220Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543100340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543114740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543166 containerd[1750]: time="2025-01-30T13:25:00.543125660Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:25:00.543430 containerd[1750]: time="2025-01-30T13:25:00.543184420Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:25:00.543430 containerd[1750]: time="2025-01-30T13:25:00.543202980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:25:00.543430 containerd[1750]: time="2025-01-30T13:25:00.543213900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:25:00.543668 containerd[1750]: time="2025-01-30T13:25:00.543225980Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:25:00.543699 containerd[1750]: time="2025-01-30T13:25:00.543665340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.543699 containerd[1750]: time="2025-01-30T13:25:00.543686940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:25:00.543699 containerd[1750]: time="2025-01-30T13:25:00.543697300Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:25:00.543757 containerd[1750]: time="2025-01-30T13:25:00.543707700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:25:00.544363 containerd[1750]: time="2025-01-30T13:25:00.544284660Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:25:00.544521 containerd[1750]: time="2025-01-30T13:25:00.544370540Z" level=info msg="Connect containerd service"
Jan 30 13:25:00.544521 containerd[1750]: time="2025-01-30T13:25:00.544404300Z" level=info msg="using legacy CRI server"
Jan 30 13:25:00.544521 containerd[1750]: time="2025-01-30T13:25:00.544411580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:25:00.544576 containerd[1750]: time="2025-01-30T13:25:00.544532020Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:25:00.545918 containerd[1750]: time="2025-01-30T13:25:00.545881980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:25:00.546643 containerd[1750]: time="2025-01-30T13:25:00.546604780Z" level=info msg="Start subscribing containerd event"
Jan 30 13:25:00.546683 containerd[1750]: time="2025-01-30T13:25:00.546657460Z" level=info msg="Start recovering state"
Jan 30 13:25:00.546747 containerd[1750]: time="2025-01-30T13:25:00.546728700Z" level=info msg="Start event monitor"
Jan 30 13:25:00.546747 containerd[1750]: time="2025-01-30T13:25:00.546744220Z" level=info msg="Start snapshots syncer"
Jan 30 13:25:00.546797 containerd[1750]: time="2025-01-30T13:25:00.546755660Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:25:00.546797 containerd[1750]: time="2025-01-30T13:25:00.546762900Z" level=info msg="Start streaming server"
Jan 30 13:25:00.548468 containerd[1750]: time="2025-01-30T13:25:00.548443100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:25:00.548524 containerd[1750]: time="2025-01-30T13:25:00.548507220Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:25:00.556129 containerd[1750]: time="2025-01-30T13:25:00.549095660Z" level=info msg="containerd successfully booted in 0.110790s"
Jan 30 13:25:00.549185 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:25:00.555863 systemd-networkd[1537]: eth0: Gained IPv6LL
Jan 30 13:25:00.566434 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:25:00.579694 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:25:00.603089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:25:00.630172 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:25:00.666360 sshd_keygen[1700]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:25:00.687770 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:25:00.703190 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:25:00.714720 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 30 13:25:00.720803 tar[1740]: linux-arm64/LICENSE Jan 30 13:25:00.720803 tar[1740]: linux-arm64/README.md Jan 30 13:25:00.726430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:25:00.735027 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:25:00.736980 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:25:00.760629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:25:00.768806 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:25:00.779094 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 13:25:00.790276 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:25:00.805361 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:25:00.817282 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:25:00.825754 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:25:01.194216 systemd-networkd[1537]: enP48178s1: Gained IPv6LL Jan 30 13:25:01.355126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:01.363375 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:25:01.363974 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:01.369698 systemd[1]: Startup finished in 712ms (kernel) + 9.345s (initrd) + 6.085s (userspace) = 16.142s. 
Jan 30 13:25:01.405123 agetty[1830]: failed to open credentials directory Jan 30 13:25:01.405165 agetty[1831]: failed to open credentials directory Jan 30 13:25:01.516066 login[1831]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 30 13:25:01.517684 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:01.526400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:25:01.532900 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:25:01.536080 systemd-logind[1690]: New session 1 of user core. Jan 30 13:25:01.547031 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:25:01.554578 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:25:01.561869 (systemd)[1850]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:25:01.665947 waagent[1826]: 2025-01-30T13:25:01.665349Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:25:01.672926 waagent[1826]: 2025-01-30T13:25:01.671802Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 30 13:25:01.677302 waagent[1826]: 2025-01-30T13:25:01.676802Z INFO Daemon Daemon Python: 3.11.10 Jan 30 13:25:01.681774 waagent[1826]: 2025-01-30T13:25:01.681695Z INFO Daemon Daemon Run daemon Jan 30 13:25:01.686208 waagent[1826]: 2025-01-30T13:25:01.686145Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 30 13:25:01.697926 waagent[1826]: 2025-01-30T13:25:01.695905Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:25:01.702937 waagent[1826]: 2025-01-30T13:25:01.702215Z INFO Daemon Daemon Activate resource disk Jan 30 13:25:01.707539 waagent[1826]: 2025-01-30T13:25:01.707463Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:25:01.720695 
waagent[1826]: 2025-01-30T13:25:01.720615Z INFO Daemon Daemon Found device: None Jan 30 13:25:01.725929 waagent[1826]: 2025-01-30T13:25:01.725704Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:25:01.735099 waagent[1826]: 2025-01-30T13:25:01.735021Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 13:25:01.750942 waagent[1826]: 2025-01-30T13:25:01.749016Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:25:01.754956 systemd[1850]: Queued start job for default target default.target. Jan 30 13:25:01.756053 waagent[1826]: 2025-01-30T13:25:01.755809Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:25:01.764016 systemd[1850]: Created slice app.slice - User Application Slice. Jan 30 13:25:01.764047 systemd[1850]: Reached target paths.target - Paths. Jan 30 13:25:01.764060 systemd[1850]: Reached target timers.target - Timers. Jan 30 13:25:01.766632 systemd[1850]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:25:01.770850 waagent[1826]: 2025-01-30T13:25:01.770751Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 13:25:01.787369 waagent[1826]: 2025-01-30T13:25:01.787286Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:25:01.797487 waagent[1826]: 2025-01-30T13:25:01.797406Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:25:01.802885 waagent[1826]: 2025-01-30T13:25:01.802814Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:25:01.816006 systemd[1850]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:25:01.816129 systemd[1850]: Reached target sockets.target - Sockets. 
Jan 30 13:25:01.816142 systemd[1850]: Reached target basic.target - Basic System. Jan 30 13:25:01.816182 systemd[1850]: Reached target default.target - Main User Target. Jan 30 13:25:01.816207 systemd[1850]: Startup finished in 247ms. Jan 30 13:25:01.816316 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:25:01.822903 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:25:01.857249 waagent[1826]: 2025-01-30T13:25:01.856283Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:25:01.880782 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 13:25:01.884301 waagent[1826]: 2025-01-30T13:25:01.883853Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:25:01.889322 waagent[1826]: 2025-01-30T13:25:01.889154Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:25:01.895373 waagent[1826]: 2025-01-30T13:25:01.895299Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 30 13:25:01.903558 waagent[1826]: 2025-01-30T13:25:01.903087Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:25:01.909407 waagent[1826]: 2025-01-30T13:25:01.908882Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:25:01.914735 waagent[1826]: 2025-01-30T13:25:01.914563Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:25:01.939569 waagent[1826]: 2025-01-30T13:25:01.939216Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:25:01.946235 waagent[1826]: 2025-01-30T13:25:01.946198Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:25:01.951584 waagent[1826]: 2025-01-30T13:25:01.951532Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:25:02.087135 waagent[1826]: 2025-01-30T13:25:02.086562Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:25:02.088830 kubelet[1836]: E0130 13:25:02.088746 1836 run.go:74] "command failed" err="failed to load kubelet config file, path: 
/var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:02.093892 waagent[1826]: 2025-01-30T13:25:02.093807Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 13:25:02.099762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:02.099905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:02.105062 waagent[1826]: 2025-01-30T13:25:02.104474Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:25:02.124343 waagent[1826]: 2025-01-30T13:25:02.124287Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 13:25:02.130373 waagent[1826]: 2025-01-30T13:25:02.130314Z INFO Daemon Jan 30 13:25:02.133438 waagent[1826]: 2025-01-30T13:25:02.133386Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8060e912-6486-45af-8b94-94a36f6bdae0 eTag: 7958933129871429195 source: Fabric] Jan 30 13:25:02.145256 waagent[1826]: 2025-01-30T13:25:02.145201Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:25:02.152420 waagent[1826]: 2025-01-30T13:25:02.152366Z INFO Daemon Jan 30 13:25:02.155302 waagent[1826]: 2025-01-30T13:25:02.155253Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:25:02.172643 waagent[1826]: 2025-01-30T13:25:02.172599Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:25:02.268941 waagent[1826]: 2025-01-30T13:25:02.268816Z INFO Daemon Downloaded certificate {'thumbprint': 'C4AA6510ACDE9F57542EEDA901E4431CBC76C329', 'hasPrivateKey': True} Jan 30 13:25:02.279359 waagent[1826]: 2025-01-30T13:25:02.279302Z INFO Daemon Downloaded certificate {'thumbprint': 'BFF5C429F15100E97EC88828383BB9FABFAAA3FE', 'hasPrivateKey': False} Jan 30 13:25:02.290002 waagent[1826]: 2025-01-30T13:25:02.289903Z INFO Daemon Fetch goal state completed Jan 30 13:25:02.308992 waagent[1826]: 2025-01-30T13:25:02.308886Z INFO Daemon Daemon Starting provisioning Jan 30 13:25:02.314316 waagent[1826]: 2025-01-30T13:25:02.314248Z INFO Daemon Daemon Handle ovf-env.xml. Jan 30 13:25:02.319047 waagent[1826]: 2025-01-30T13:25:02.318993Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-a27a4db638] Jan 30 13:25:02.330813 waagent[1826]: 2025-01-30T13:25:02.330735Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-a27a4db638] Jan 30 13:25:02.337626 waagent[1826]: 2025-01-30T13:25:02.337522Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:25:02.345428 waagent[1826]: 2025-01-30T13:25:02.345359Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:25:02.368637 systemd-networkd[1537]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:25:02.368652 systemd-networkd[1537]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:25:02.368683 systemd-networkd[1537]: eth0: DHCP lease lost Jan 30 13:25:02.369312 waagent[1826]: 2025-01-30T13:25:02.369215Z INFO Daemon Daemon Create user account if not exists Jan 30 13:25:02.375282 waagent[1826]: 2025-01-30T13:25:02.375207Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:25:02.380946 systemd-networkd[1537]: eth0: DHCPv6 lease lost Jan 30 13:25:02.381415 waagent[1826]: 2025-01-30T13:25:02.381172Z INFO Daemon Daemon Configure sudoer Jan 30 13:25:02.386209 waagent[1826]: 2025-01-30T13:25:02.386138Z INFO Daemon Daemon Configure sshd Jan 30 13:25:02.391136 waagent[1826]: 2025-01-30T13:25:02.391068Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:25:02.405646 waagent[1826]: 2025-01-30T13:25:02.405569Z INFO Daemon Daemon Deploy ssh public key. Jan 30 13:25:02.427021 systemd-networkd[1537]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 13:25:02.517628 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:02.521814 systemd-logind[1690]: New session 2 of user core. Jan 30 13:25:02.528118 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:25:03.498443 waagent[1826]: 2025-01-30T13:25:03.498379Z INFO Daemon Daemon Provisioning complete Jan 30 13:25:03.519348 waagent[1826]: 2025-01-30T13:25:03.519296Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:25:03.526401 waagent[1826]: 2025-01-30T13:25:03.526339Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 30 13:25:03.536728 waagent[1826]: 2025-01-30T13:25:03.536649Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:25:03.667497 waagent[1906]: 2025-01-30T13:25:03.666989Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:25:03.667497 waagent[1906]: 2025-01-30T13:25:03.667140Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 30 13:25:03.667497 waagent[1906]: 2025-01-30T13:25:03.667192Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 30 13:25:03.679944 waagent[1906]: 2025-01-30T13:25:03.677871Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:25:03.679944 waagent[1906]: 2025-01-30T13:25:03.678121Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:25:03.679944 waagent[1906]: 2025-01-30T13:25:03.678181Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:25:03.690495 waagent[1906]: 2025-01-30T13:25:03.690427Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:25:03.699241 waagent[1906]: 2025-01-30T13:25:03.699194Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 13:25:03.699938 waagent[1906]: 2025-01-30T13:25:03.699877Z INFO ExtHandler Jan 30 13:25:03.700104 waagent[1906]: 2025-01-30T13:25:03.700069Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 58436183-83e1-4d74-aa37-385f00c793d0 eTag: 7958933129871429195 source: Fabric] Jan 30 13:25:03.700476 waagent[1906]: 2025-01-30T13:25:03.700440Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:25:03.701163 waagent[1906]: 2025-01-30T13:25:03.701119Z INFO ExtHandler Jan 30 13:25:03.701298 waagent[1906]: 2025-01-30T13:25:03.701268Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:25:03.705697 waagent[1906]: 2025-01-30T13:25:03.705665Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:25:03.786804 waagent[1906]: 2025-01-30T13:25:03.786658Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C4AA6510ACDE9F57542EEDA901E4431CBC76C329', 'hasPrivateKey': True} Jan 30 13:25:03.787243 waagent[1906]: 2025-01-30T13:25:03.787196Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BFF5C429F15100E97EC88828383BB9FABFAAA3FE', 'hasPrivateKey': False} Jan 30 13:25:03.787657 waagent[1906]: 2025-01-30T13:25:03.787616Z INFO ExtHandler Fetch goal state completed Jan 30 13:25:03.807406 waagent[1906]: 2025-01-30T13:25:03.807350Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1906 Jan 30 13:25:03.807555 waagent[1906]: 2025-01-30T13:25:03.807518Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:25:03.809134 waagent[1906]: 2025-01-30T13:25:03.809090Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:25:03.809505 waagent[1906]: 2025-01-30T13:25:03.809466Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:25:03.820452 waagent[1906]: 2025-01-30T13:25:03.820409Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 13:25:03.820645 waagent[1906]: 2025-01-30T13:25:03.820606Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:25:03.826772 waagent[1906]: 2025-01-30T13:25:03.826734Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jan 30 13:25:03.833307 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit waagent.service)... Jan 30 13:25:03.833325 systemd[1]: Reloading... Jan 30 13:25:03.910071 zram_generator::config[1954]: No configuration found. Jan 30 13:25:04.017745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:25:04.099003 systemd[1]: Reloading finished in 265 ms. Jan 30 13:25:04.126936 waagent[1906]: 2025-01-30T13:25:04.123935Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:25:04.130040 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)... Jan 30 13:25:04.130056 systemd[1]: Reloading... Jan 30 13:25:04.207948 zram_generator::config[2043]: No configuration found. Jan 30 13:25:04.306627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:25:04.386844 systemd[1]: Reloading finished in 256 ms. Jan 30 13:25:04.411381 waagent[1906]: 2025-01-30T13:25:04.410535Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:25:04.411381 waagent[1906]: 2025-01-30T13:25:04.410761Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:25:04.494322 waagent[1906]: 2025-01-30T13:25:04.494246Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 13:25:04.495079 waagent[1906]: 2025-01-30T13:25:04.495033Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:25:04.495942 waagent[1906]: 2025-01-30T13:25:04.495868Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 13:25:04.496042 waagent[1906]: 2025-01-30T13:25:04.495996Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:25:04.496127 waagent[1906]: 2025-01-30T13:25:04.496096Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:25:04.496357 waagent[1906]: 2025-01-30T13:25:04.496314Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 13:25:04.496855 waagent[1906]: 2025-01-30T13:25:04.496796Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:25:04.497090 waagent[1906]: 2025-01-30T13:25:04.496980Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:25:04.497090 waagent[1906]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:25:04.497090 waagent[1906]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:25:04.497090 waagent[1906]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:25:04.497090 waagent[1906]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:25:04.497090 waagent[1906]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:25:04.497090 waagent[1906]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:25:04.497252 waagent[1906]: 2025-01-30T13:25:04.497152Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:25:04.497252 waagent[1906]: 2025-01-30T13:25:04.497219Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:25:04.497594 waagent[1906]: 2025-01-30T13:25:04.497344Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:25:04.497680 waagent[1906]: 2025-01-30T13:25:04.497632Z INFO SendTelemetryHandler ExtHandler Successfully started the 
SendTelemetryHandler thread Jan 30 13:25:04.498289 waagent[1906]: 2025-01-30T13:25:04.498187Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:25:04.498579 waagent[1906]: 2025-01-30T13:25:04.498506Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:25:04.498745 waagent[1906]: 2025-01-30T13:25:04.498576Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 30 13:25:04.499789 waagent[1906]: 2025-01-30T13:25:04.499135Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:25:04.500202 waagent[1906]: 2025-01-30T13:25:04.500154Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:25:04.500823 waagent[1906]: 2025-01-30T13:25:04.500762Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:25:04.508470 waagent[1906]: 2025-01-30T13:25:04.507317Z INFO ExtHandler ExtHandler Jan 30 13:25:04.508470 waagent[1906]: 2025-01-30T13:25:04.507433Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 194a1b8b-afab-4e35-8edd-a6c8e5bf5b18 correlation e05a390d-bcc3-4426-84f6-fb34e38b3e87 created: 2025-01-30T13:24:17.558861Z] Jan 30 13:25:04.508470 waagent[1906]: 2025-01-30T13:25:04.507785Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 30 13:25:04.508470 waagent[1906]: 2025-01-30T13:25:04.508378Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 30 13:25:04.518126 waagent[1906]: 2025-01-30T13:25:04.518060Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:25:04.518126 waagent[1906]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:25:04.518126 waagent[1906]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:25:04.518126 waagent[1906]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:0a:b0 brd ff:ff:ff:ff:ff:ff Jan 30 13:25:04.518126 waagent[1906]: 3: enP48178s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:0a:b0 brd ff:ff:ff:ff:ff:ff\ altname enP48178p0s2 Jan 30 13:25:04.518126 waagent[1906]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:25:04.518126 waagent[1906]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:25:04.518126 waagent[1906]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:25:04.518126 waagent[1906]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:25:04.518126 waagent[1906]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:25:04.518126 waagent[1906]: 2: eth0 inet6 fe80::222:48ff:fe7b:ab0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:25:04.518126 waagent[1906]: 3: enP48178s1 inet6 fe80::222:48ff:fe7b:ab0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:25:04.553811 waagent[1906]: 2025-01-30T13:25:04.553750Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
C4F29791-D5AB-4F17-AF33-20A6ED53FDAD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:25:04.561729 waagent[1906]: 2025-01-30T13:25:04.561644Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 13:25:04.561729 waagent[1906]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.561729 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.561729 waagent[1906]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.561729 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.561729 waagent[1906]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.561729 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.561729 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:25:04.561729 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:25:04.561729 waagent[1906]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:25:04.565522 waagent[1906]: 2025-01-30T13:25:04.565116Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:25:04.565522 waagent[1906]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.565522 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.565522 waagent[1906]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.565522 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.565522 waagent[1906]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:25:04.565522 waagent[1906]: pkts bytes target prot opt in out source destination Jan 30 13:25:04.565522 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:25:04.565522 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:25:04.565522 waagent[1906]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Jan 30 13:25:04.565522 waagent[1906]: 2025-01-30T13:25:04.565409Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:25:12.350571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:25:12.361101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:12.457606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:12.472413 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:12.518568 kubelet[2136]: E0130 13:25:12.518521 2136 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:12.521907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:12.522072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:22.770603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:25:22.778168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:22.881557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:25:22.892194 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:22.975763 kubelet[2152]: E0130 13:25:22.975668 2152 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:22.978132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:22.978277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:23.571039 chronyd[1668]: Selected source PHC0 Jan 30 13:25:33.020633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:25:33.029091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:33.277955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:33.282024 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:33.322062 kubelet[2168]: E0130 13:25:33.322022 2168 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:33.324466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:33.324782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:43.520652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 30 13:25:43.530128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:43.707924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:43.712701 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:43.751522 kubelet[2185]: E0130 13:25:43.751446 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:43.754113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:43.754260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:45.247038 update_engine[1699]: I20250130 13:25:45.246953 1699 update_attempter.cc:509] Updating boot flags... Jan 30 13:25:45.310956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2208) Jan 30 13:25:45.421215 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2212) Jan 30 13:25:47.061081 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 30 13:25:53.770561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:25:53.777121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:54.064669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:25:54.079197 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:54.117182 kubelet[2315]: E0130 13:25:54.117124 2315 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:54.119128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:54.119251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:54.751993 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:25:54.753165 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:39500.service - OpenSSH per-connection server daemon (10.200.16.10:39500). Jan 30 13:25:55.235288 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 39500 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:55.236597 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:55.240620 systemd-logind[1690]: New session 3 of user core. Jan 30 13:25:55.249178 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:25:55.647327 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:39506.service - OpenSSH per-connection server daemon (10.200.16.10:39506). Jan 30 13:25:56.075207 sshd[2329]: Accepted publickey for core from 10.200.16.10 port 39506 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:56.076491 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:56.080944 systemd-logind[1690]: New session 4 of user core. 
Jan 30 13:25:56.093139 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:25:56.407047 sshd[2331]: Connection closed by 10.200.16.10 port 39506 Jan 30 13:25:56.406107 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:56.408891 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:39506.service: Deactivated successfully. Jan 30 13:25:56.410617 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:25:56.411795 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:25:56.412784 systemd-logind[1690]: Removed session 4. Jan 30 13:25:56.486089 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:39912.service - OpenSSH per-connection server daemon (10.200.16.10:39912). Jan 30 13:25:56.902118 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 39912 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:56.903445 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:56.907272 systemd-logind[1690]: New session 5 of user core. Jan 30 13:25:56.915103 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:25:57.218742 sshd[2338]: Connection closed by 10.200.16.10 port 39912 Jan 30 13:25:57.218584 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:57.221495 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:39912.service: Deactivated successfully. Jan 30 13:25:57.223181 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:25:57.224737 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:25:57.225655 systemd-logind[1690]: Removed session 5. Jan 30 13:25:57.294087 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:39916.service - OpenSSH per-connection server daemon (10.200.16.10:39916). 
Jan 30 13:25:57.720259 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 39916 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:57.722593 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:57.727605 systemd-logind[1690]: New session 6 of user core. Jan 30 13:25:57.734119 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:25:58.040925 sshd[2345]: Connection closed by 10.200.16.10 port 39916 Jan 30 13:25:58.040347 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:58.043797 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:39916.service: Deactivated successfully. Jan 30 13:25:58.045339 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:25:58.046144 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:25:58.047158 systemd-logind[1690]: Removed session 6. Jan 30 13:25:58.118463 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:39922.service - OpenSSH per-connection server daemon (10.200.16.10:39922). Jan 30 13:25:58.531381 sshd[2350]: Accepted publickey for core from 10.200.16.10 port 39922 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:58.532741 sshd-session[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:58.536884 systemd-logind[1690]: New session 7 of user core. Jan 30 13:25:58.545083 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 30 13:25:58.799631 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:25:58.799904 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:25:58.819012 sudo[2353]: pam_unix(sudo:session): session closed for user root Jan 30 13:25:58.888047 sshd[2352]: Connection closed by 10.200.16.10 port 39922 Jan 30 13:25:58.887243 sshd-session[2350]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:58.890394 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:39922.service: Deactivated successfully. Jan 30 13:25:58.892230 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:25:58.895349 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:25:58.896493 systemd-logind[1690]: Removed session 7. Jan 30 13:25:58.974191 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:39926.service - OpenSSH per-connection server daemon (10.200.16.10:39926). Jan 30 13:25:59.403150 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 39926 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:25:59.404488 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:25:59.408513 systemd-logind[1690]: New session 8 of user core. Jan 30 13:25:59.419160 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:25:59.650318 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:25:59.651028 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:25:59.654507 sudo[2362]: pam_unix(sudo:session): session closed for user root Jan 30 13:25:59.660019 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:25:59.660309 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:25:59.672227 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:25:59.699940 augenrules[2384]: No rules Jan 30 13:25:59.700972 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:25:59.701146 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:25:59.703229 sudo[2361]: pam_unix(sudo:session): session closed for user root Jan 30 13:25:59.777692 sshd[2360]: Connection closed by 10.200.16.10 port 39926 Jan 30 13:25:59.778252 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Jan 30 13:25:59.782864 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:39926.service: Deactivated successfully. Jan 30 13:25:59.782965 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:25:59.785106 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:25:59.785980 systemd-logind[1690]: Removed session 8. Jan 30 13:25:59.861748 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:39932.service - OpenSSH per-connection server daemon (10.200.16.10:39932). 
Jan 30 13:26:00.294523 sshd[2392]: Accepted publickey for core from 10.200.16.10 port 39932 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20 Jan 30 13:26:00.295882 sshd-session[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:26:00.299881 systemd-logind[1690]: New session 9 of user core. Jan 30 13:26:00.311105 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:26:00.540473 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:26:00.540763 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:26:00.957201 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:26:00.957293 (dockerd)[2413]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:26:01.262053 dockerd[2413]: time="2025-01-30T13:26:01.261763352Z" level=info msg="Starting up" Jan 30 13:26:01.392626 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport81354657-merged.mount: Deactivated successfully. Jan 30 13:26:01.461491 dockerd[2413]: time="2025-01-30T13:26:01.461245805Z" level=info msg="Loading containers: start." Jan 30 13:26:01.636950 kernel: Initializing XFRM netlink socket Jan 30 13:26:01.696746 systemd-networkd[1537]: docker0: Link UP Jan 30 13:26:01.733334 dockerd[2413]: time="2025-01-30T13:26:01.733293753Z" level=info msg="Loading containers: done." 
Jan 30 13:26:01.756371 dockerd[2413]: time="2025-01-30T13:26:01.756259265Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:26:01.756552 dockerd[2413]: time="2025-01-30T13:26:01.756389225Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:26:01.756552 dockerd[2413]: time="2025-01-30T13:26:01.756521025Z" level=info msg="Daemon has completed initialization" Jan 30 13:26:01.805329 dockerd[2413]: time="2025-01-30T13:26:01.805185609Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:26:01.805839 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:26:03.082725 containerd[1750]: time="2025-01-30T13:26:03.082447578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:26:04.020819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917423561.mount: Deactivated successfully. Jan 30 13:26:04.270460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 13:26:04.277173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:04.411691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:26:04.416805 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:26:04.466995 kubelet[2627]: E0130 13:26:04.466907 2627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:26:04.469630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:26:04.469796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:26:06.313254 containerd[1750]: time="2025-01-30T13:26:06.313194840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:06.316153 containerd[1750]: time="2025-01-30T13:26:06.316102198Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 30 13:26:06.319926 containerd[1750]: time="2025-01-30T13:26:06.319838637Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:06.324584 containerd[1750]: time="2025-01-30T13:26:06.324507195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:06.326444 containerd[1750]: time="2025-01-30T13:26:06.325952194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 3.243464456s" Jan 30 13:26:06.326444 containerd[1750]: time="2025-01-30T13:26:06.326003714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 13:26:06.349158 containerd[1750]: time="2025-01-30T13:26:06.348897504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:26:09.078578 containerd[1750]: time="2025-01-30T13:26:09.078523810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:09.082935 containerd[1750]: time="2025-01-30T13:26:09.082864889Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 30 13:26:09.086671 containerd[1750]: time="2025-01-30T13:26:09.086621127Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:09.096925 containerd[1750]: time="2025-01-30T13:26:09.096869163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:09.098213 containerd[1750]: time="2025-01-30T13:26:09.097813402Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" 
in 2.748859578s" Jan 30 13:26:09.098213 containerd[1750]: time="2025-01-30T13:26:09.097851922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 13:26:09.120654 containerd[1750]: time="2025-01-30T13:26:09.120434472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:26:10.986104 containerd[1750]: time="2025-01-30T13:26:10.986058790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:10.988473 containerd[1750]: time="2025-01-30T13:26:10.988425789Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 30 13:26:10.992447 containerd[1750]: time="2025-01-30T13:26:10.992400267Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:10.998410 containerd[1750]: time="2025-01-30T13:26:10.998347425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:10.999527 containerd[1750]: time="2025-01-30T13:26:10.999374384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.878897032s" Jan 30 13:26:10.999527 containerd[1750]: time="2025-01-30T13:26:10.999411584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference 
\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 13:26:11.023616 containerd[1750]: time="2025-01-30T13:26:11.023541814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:26:12.059705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912473151.mount: Deactivated successfully. Jan 30 13:26:12.364039 containerd[1750]: time="2025-01-30T13:26:12.363476460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:12.366729 containerd[1750]: time="2025-01-30T13:26:12.366659978Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 30 13:26:12.370315 containerd[1750]: time="2025-01-30T13:26:12.370266856Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:12.374741 containerd[1750]: time="2025-01-30T13:26:12.374687173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:12.375405 containerd[1750]: time="2025-01-30T13:26:12.375240973Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.351628079s" Jan 30 13:26:12.375405 containerd[1750]: time="2025-01-30T13:26:12.375275253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 13:26:12.396661 
containerd[1750]: time="2025-01-30T13:26:12.396419721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:26:13.150165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678296052.mount: Deactivated successfully. Jan 30 13:26:14.459955 containerd[1750]: time="2025-01-30T13:26:14.459464635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:14.461701 containerd[1750]: time="2025-01-30T13:26:14.461456154Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 30 13:26:14.466386 containerd[1750]: time="2025-01-30T13:26:14.466334191Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:14.471238 containerd[1750]: time="2025-01-30T13:26:14.471156188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:14.472305 containerd[1750]: time="2025-01-30T13:26:14.472160868Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.075699467s" Jan 30 13:26:14.472305 containerd[1750]: time="2025-01-30T13:26:14.472201348Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:26:14.492191 containerd[1750]: time="2025-01-30T13:26:14.492152936Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:26:14.520447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 13:26:14.528112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:14.625694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:26:14.637215 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:26:14.673797 kubelet[2767]: E0130 13:26:14.673754 2767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:26:14.676562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:26:14.676716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:26:15.431532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888279312.mount: Deactivated successfully. 
Jan 30 13:26:15.472713 containerd[1750]: time="2025-01-30T13:26:15.471983863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:15.474768 containerd[1750]: time="2025-01-30T13:26:15.474717341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 30 13:26:15.483638 containerd[1750]: time="2025-01-30T13:26:15.483594856Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:15.489143 containerd[1750]: time="2025-01-30T13:26:15.489092173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:15.490288 containerd[1750]: time="2025-01-30T13:26:15.489959532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 997.768636ms" Jan 30 13:26:15.490288 containerd[1750]: time="2025-01-30T13:26:15.489990612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 13:26:15.509954 containerd[1750]: time="2025-01-30T13:26:15.509872721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:26:16.316116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383902979.mount: Deactivated successfully. 
Jan 30 13:26:21.270953 containerd[1750]: time="2025-01-30T13:26:21.270251633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:21.307960 containerd[1750]: time="2025-01-30T13:26:21.307871099Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 30 13:26:21.330254 containerd[1750]: time="2025-01-30T13:26:21.330188091Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:21.339079 containerd[1750]: time="2025-01-30T13:26:21.338967847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:26:21.340483 containerd[1750]: time="2025-01-30T13:26:21.340183967Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.830273526s" Jan 30 13:26:21.340483 containerd[1750]: time="2025-01-30T13:26:21.340221847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 13:26:24.770480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 13:26:24.780458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:25.284326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:26:25.294232 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:26:25.343924 kubelet[2897]: E0130 13:26:25.342021 2897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:26:25.344994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:26:25.345135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:26:27.771251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:26:27.781183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:27.794177 systemd[1]: Reloading requested from client PID 2912 ('systemctl') (unit session-9.scope)... Jan 30 13:26:27.794320 systemd[1]: Reloading... Jan 30 13:26:27.912946 zram_generator::config[2948]: No configuration found. Jan 30 13:26:28.009238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:26:28.088625 systemd[1]: Reloading finished in 293 ms. Jan 30 13:26:28.139194 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:26:28.139274 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:26:28.139787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:26:28.144247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:30.903829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:26:30.917266 (kubelet)[3019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:26:30.955645 kubelet[3019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:26:30.955645 kubelet[3019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:26:30.955645 kubelet[3019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:26:30.956704 kubelet[3019]: I0130 13:26:30.956654 3019 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:26:34.174292 kubelet[3019]: I0130 13:26:34.173295 3019 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:26:34.174292 kubelet[3019]: I0130 13:26:34.173333 3019 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:26:34.174292 kubelet[3019]: I0130 13:26:34.173585 3019 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:26:34.186274 kubelet[3019]: E0130 13:26:34.186232 3019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.186642 kubelet[3019]: I0130 13:26:34.186538 3019 
dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:26:34.195399 kubelet[3019]: I0130 13:26:34.195369 3019 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:26:34.195585 kubelet[3019]: I0130 13:26:34.195551 3019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:26:34.195748 kubelet[3019]: I0130 13:26:34.195582 3019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-a27a4db638","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPU
Limits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:26:34.195879 kubelet[3019]: I0130 13:26:34.195757 3019 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:26:34.195879 kubelet[3019]: I0130 13:26:34.195767 3019 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:26:34.195958 kubelet[3019]: I0130 13:26:34.195883 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:26:34.196918 kubelet[3019]: I0130 13:26:34.196892 3019 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:26:34.197131 kubelet[3019]: I0130 13:26:34.197111 3019 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:26:34.197158 kubelet[3019]: I0130 13:26:34.197154 3019 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:26:34.197179 kubelet[3019]: I0130 13:26:34.197174 3019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:26:34.199489 kubelet[3019]: W0130 13:26:34.199446 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.199527 kubelet[3019]: E0130 13:26:34.199501 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.199927 kubelet[3019]: W0130 13:26:34.199795 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 
13:26:34.199927 kubelet[3019]: E0130 13:26:34.199836 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.200259 kubelet[3019]: I0130 13:26:34.200230 3019 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:26:34.201266 kubelet[3019]: I0130 13:26:34.200400 3019 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:26:34.201266 kubelet[3019]: W0130 13:26:34.200449 3019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:26:34.201785 kubelet[3019]: I0130 13:26:34.201611 3019 server.go:1264] "Started kubelet" Jan 30 13:26:34.206692 kubelet[3019]: I0130 13:26:34.206640 3019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:26:34.207951 kubelet[3019]: I0130 13:26:34.207138 3019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:26:34.207951 kubelet[3019]: I0130 13:26:34.207494 3019 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:26:34.207951 kubelet[3019]: E0130 13:26:34.207731 3019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-a27a4db638.181f7b56dd7ba1d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-a27a4db638,UID:ci-4186.1.0-a-a27a4db638,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-a27a4db638,},FirstTimestamp:2025-01-30 13:26:34.201571795 +0000 UTC m=+3.281287923,LastTimestamp:2025-01-30 13:26:34.201571795 +0000 UTC m=+3.281287923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-a27a4db638,}" Jan 30 13:26:34.209463 kubelet[3019]: I0130 13:26:34.209428 3019 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:26:34.211526 kubelet[3019]: I0130 13:26:34.211504 3019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:26:34.213931 kubelet[3019]: E0130 13:26:34.213889 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:34.214559 kubelet[3019]: I0130 13:26:34.214287 3019 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:26:34.214559 kubelet[3019]: I0130 13:26:34.214416 3019 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:26:34.216034 kubelet[3019]: I0130 13:26:34.216017 3019 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:26:34.216481 kubelet[3019]: W0130 13:26:34.216441 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.216584 kubelet[3019]: E0130 13:26:34.216571 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.20.21:6443: connect: connection refused Jan 30 13:26:34.216795 kubelet[3019]: E0130 13:26:34.216775 3019 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:26:34.217342 kubelet[3019]: E0130 13:26:34.217313 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-a27a4db638?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms" Jan 30 13:26:34.217718 kubelet[3019]: I0130 13:26:34.217698 3019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:26:34.219210 kubelet[3019]: I0130 13:26:34.219191 3019 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:26:34.219297 kubelet[3019]: I0130 13:26:34.219287 3019 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:26:34.288776 kubelet[3019]: I0130 13:26:34.288753 3019 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:26:34.289153 kubelet[3019]: I0130 13:26:34.288939 3019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:26:34.289153 kubelet[3019]: I0130 13:26:34.288962 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:26:34.316631 kubelet[3019]: I0130 13:26:34.316600 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:34.317030 kubelet[3019]: E0130 13:26:34.317004 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:34.418814 kubelet[3019]: E0130 13:26:34.418773 3019 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-a27a4db638?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" Jan 30 13:26:34.520013 kubelet[3019]: I0130 13:26:34.519612 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:34.520013 kubelet[3019]: E0130 13:26:34.519891 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:34.820006 kubelet[3019]: E0130 13:26:34.819881 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-a27a4db638?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" Jan 30 13:26:34.921828 kubelet[3019]: I0130 13:26:34.921794 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:34.922295 kubelet[3019]: E0130 13:26:34.922261 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:35.078065 kubelet[3019]: I0130 13:26:35.077389 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:26:35.078694 kubelet[3019]: I0130 13:26:35.078659 3019 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:26:35.078694 kubelet[3019]: I0130 13:26:35.078697 3019 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:26:35.078785 kubelet[3019]: I0130 13:26:35.078715 3019 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:26:35.078785 kubelet[3019]: E0130 13:26:35.078753 3019 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:26:35.079671 kubelet[3019]: W0130 13:26:35.079645 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.079989 kubelet[3019]: E0130 13:26:35.079780 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.113346 kubelet[3019]: I0130 13:26:35.113309 3019 policy_none.go:49] "None policy: Start" Jan 30 13:26:35.114099 kubelet[3019]: I0130 13:26:35.114078 3019 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:26:35.114138 kubelet[3019]: I0130 13:26:35.114107 3019 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:26:35.179831 kubelet[3019]: E0130 13:26:35.179798 3019 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:26:35.272050 kubelet[3019]: W0130 13:26:35.271989 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.272050 
kubelet[3019]: E0130 13:26:35.272054 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.571131 kubelet[3019]: E0130 13:26:35.380442 3019 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:26:35.571131 kubelet[3019]: W0130 13:26:35.483180 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.571131 kubelet[3019]: E0130 13:26:35.483238 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.621164 kubelet[3019]: E0130 13:26:35.621118 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-a27a4db638?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" Jan 30 13:26:35.724513 kubelet[3019]: I0130 13:26:35.724478 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:35.724830 kubelet[3019]: E0130 13:26:35.724799 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:35.780959 kubelet[3019]: E0130 13:26:35.780937 
3019 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:26:35.807443 kubelet[3019]: W0130 13:26:35.807387 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.807443 kubelet[3019]: E0130 13:26:35.807426 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:35.827304 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:26:35.835604 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:26:35.838545 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:26:35.845985 kubelet[3019]: I0130 13:26:35.845735 3019 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:26:35.846109 kubelet[3019]: I0130 13:26:35.845981 3019 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:26:35.846109 kubelet[3019]: I0130 13:26:35.846083 3019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:26:35.847756 kubelet[3019]: E0130 13:26:35.847646 3019 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:36.224799 kubelet[3019]: E0130 13:26:36.224696 3019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:36.395667 kubelet[3019]: W0130 13:26:36.395629 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:36.395667 kubelet[3019]: E0130 13:26:36.395675 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:36.581580 kubelet[3019]: I0130 13:26:36.581519 3019 topology_manager.go:215] "Topology Admit Handler" podUID="b8c60225bcc7304139b6d885b79f10f3" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 
13:26:36.583273 kubelet[3019]: I0130 13:26:36.583132 3019 topology_manager.go:215] "Topology Admit Handler" podUID="29386032108190daf17029bd10554d49" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.584803 kubelet[3019]: I0130 13:26:36.584586 3019 topology_manager.go:215] "Topology Admit Handler" podUID="958bb0433e55311747aef795dba066be" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.591385 systemd[1]: Created slice kubepods-burstable-podb8c60225bcc7304139b6d885b79f10f3.slice - libcontainer container kubepods-burstable-podb8c60225bcc7304139b6d885b79f10f3.slice. Jan 30 13:26:36.611297 systemd[1]: Created slice kubepods-burstable-pod29386032108190daf17029bd10554d49.slice - libcontainer container kubepods-burstable-pod29386032108190daf17029bd10554d49.slice. Jan 30 13:26:36.624380 systemd[1]: Created slice kubepods-burstable-pod958bb0433e55311747aef795dba066be.slice - libcontainer container kubepods-burstable-pod958bb0433e55311747aef795dba066be.slice. 
Jan 30 13:26:36.628119 kubelet[3019]: I0130 13:26:36.628085 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/958bb0433e55311747aef795dba066be-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-a27a4db638\" (UID: \"958bb0433e55311747aef795dba066be\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628119 kubelet[3019]: I0130 13:26:36.628121 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628260 kubelet[3019]: I0130 13:26:36.628139 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628260 kubelet[3019]: I0130 13:26:36.628167 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628260 kubelet[3019]: I0130 13:26:36.628194 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: 
\"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628260 kubelet[3019]: I0130 13:26:36.628213 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628260 kubelet[3019]: I0130 13:26:36.628229 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628367 kubelet[3019]: I0130 13:26:36.628246 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.628367 kubelet[3019]: I0130 13:26:36.628262 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:36.910209 containerd[1750]: time="2025-01-30T13:26:36.909986109Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-a27a4db638,Uid:b8c60225bcc7304139b6d885b79f10f3,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:36.922252 containerd[1750]: time="2025-01-30T13:26:36.922209461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-a27a4db638,Uid:29386032108190daf17029bd10554d49,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:36.926938 containerd[1750]: time="2025-01-30T13:26:36.926867338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-a27a4db638,Uid:958bb0433e55311747aef795dba066be,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:37.221732 kubelet[3019]: E0130 13:26:37.221615 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-a27a4db638?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="3.2s" Jan 30 13:26:37.326967 kubelet[3019]: I0130 13:26:37.326691 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:37.327269 kubelet[3019]: E0130 13:26:37.327057 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:37.377035 kubelet[3019]: W0130 13:26:37.376999 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:37.377035 kubelet[3019]: E0130 13:26:37.377041 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: 
connect: connection refused Jan 30 13:26:37.489886 kubelet[3019]: W0130 13:26:37.489751 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:37.489886 kubelet[3019]: E0130 13:26:37.489794 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-a27a4db638&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:37.554425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988686622.mount: Deactivated successfully. Jan 30 13:26:37.586876 containerd[1750]: time="2025-01-30T13:26:37.586081240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:26:37.677816 containerd[1750]: time="2025-01-30T13:26:37.677677062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 13:26:37.723960 containerd[1750]: time="2025-01-30T13:26:37.723851272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:26:37.770383 containerd[1750]: time="2025-01-30T13:26:37.769720803Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:26:37.818216 containerd[1750]: time="2025-01-30T13:26:37.818046813Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:26:37.867470 containerd[1750]: time="2025-01-30T13:26:37.867412661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:26:37.930021 containerd[1750]: time="2025-01-30T13:26:37.929405622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:26:37.930347 containerd[1750]: time="2025-01-30T13:26:37.930307381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.020243232s" Jan 30 13:26:37.932339 containerd[1750]: time="2025-01-30T13:26:37.932284460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.010002839s" Jan 30 13:26:37.974626 containerd[1750]: time="2025-01-30T13:26:37.974549993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:26:38.001399 containerd[1750]: time="2025-01-30T13:26:38.001279256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.074315798s" Jan 30 13:26:38.012597 kubelet[3019]: W0130 13:26:38.012447 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:38.012597 kubelet[3019]: E0130 13:26:38.012577 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:38.207872 containerd[1750]: time="2025-01-30T13:26:38.207466286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:38.208469 containerd[1750]: time="2025-01-30T13:26:38.208248805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:38.208469 containerd[1750]: time="2025-01-30T13:26:38.208287685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.209638 containerd[1750]: time="2025-01-30T13:26:38.209061085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.210728 containerd[1750]: time="2025-01-30T13:26:38.210617204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:38.210894 containerd[1750]: time="2025-01-30T13:26:38.210701164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:38.211428 containerd[1750]: time="2025-01-30T13:26:38.210871084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.211571 containerd[1750]: time="2025-01-30T13:26:38.211529123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.214028 containerd[1750]: time="2025-01-30T13:26:38.213948762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:38.214244 containerd[1750]: time="2025-01-30T13:26:38.214100761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:38.214244 containerd[1750]: time="2025-01-30T13:26:38.214129241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.215177 containerd[1750]: time="2025-01-30T13:26:38.215027801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:38.235108 systemd[1]: Started cri-containerd-d2f913287a1c26d306a883dd2e0905f40f6f4dd68e6d2ba37cffbefb94c97e7e.scope - libcontainer container d2f913287a1c26d306a883dd2e0905f40f6f4dd68e6d2ba37cffbefb94c97e7e. Jan 30 13:26:38.240881 systemd[1]: Started cri-containerd-99fa7bfbcbd56fd1d762dfb9060db6a0a4aaeaeceed0ad230cae7acdb38b909f.scope - libcontainer container 99fa7bfbcbd56fd1d762dfb9060db6a0a4aaeaeceed0ad230cae7acdb38b909f. Jan 30 13:26:38.242979 systemd[1]: Started cri-containerd-9d64a575f3b7bda0d92eeba15834006187796da49bb97b30f659cfea0a444108.scope - libcontainer container 9d64a575f3b7bda0d92eeba15834006187796da49bb97b30f659cfea0a444108. 
Jan 30 13:26:38.294446 containerd[1750]: time="2025-01-30T13:26:38.294301711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-a27a4db638,Uid:29386032108190daf17029bd10554d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"99fa7bfbcbd56fd1d762dfb9060db6a0a4aaeaeceed0ad230cae7acdb38b909f\"" Jan 30 13:26:38.295065 containerd[1750]: time="2025-01-30T13:26:38.295026270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-a27a4db638,Uid:958bb0433e55311747aef795dba066be,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2f913287a1c26d306a883dd2e0905f40f6f4dd68e6d2ba37cffbefb94c97e7e\"" Jan 30 13:26:38.301185 containerd[1750]: time="2025-01-30T13:26:38.301153546Z" level=info msg="CreateContainer within sandbox \"99fa7bfbcbd56fd1d762dfb9060db6a0a4aaeaeceed0ad230cae7acdb38b909f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:26:38.301736 containerd[1750]: time="2025-01-30T13:26:38.301428386Z" level=info msg="CreateContainer within sandbox \"d2f913287a1c26d306a883dd2e0905f40f6f4dd68e6d2ba37cffbefb94c97e7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:26:38.306878 containerd[1750]: time="2025-01-30T13:26:38.306761583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-a27a4db638,Uid:b8c60225bcc7304139b6d885b79f10f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d64a575f3b7bda0d92eeba15834006187796da49bb97b30f659cfea0a444108\"" Jan 30 13:26:38.310211 containerd[1750]: time="2025-01-30T13:26:38.310178381Z" level=info msg="CreateContainer within sandbox \"9d64a575f3b7bda0d92eeba15834006187796da49bb97b30f659cfea0a444108\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:26:38.390379 containerd[1750]: time="2025-01-30T13:26:38.390330890Z" level=info msg="CreateContainer within sandbox 
\"d2f913287a1c26d306a883dd2e0905f40f6f4dd68e6d2ba37cffbefb94c97e7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9275dccefa585fe195a61b918abf8d4df6b62b687fd3e3e997b4c5a38d7f87be\"" Jan 30 13:26:38.393931 containerd[1750]: time="2025-01-30T13:26:38.393882327Z" level=info msg="CreateContainer within sandbox \"99fa7bfbcbd56fd1d762dfb9060db6a0a4aaeaeceed0ad230cae7acdb38b909f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd90e71995c3ac0d6aae1e719fe9a7362b0ca762b16dff0f9e082a88944f74fd\"" Jan 30 13:26:38.394217 containerd[1750]: time="2025-01-30T13:26:38.394157567Z" level=info msg="StartContainer for \"9275dccefa585fe195a61b918abf8d4df6b62b687fd3e3e997b4c5a38d7f87be\"" Jan 30 13:26:38.399548 containerd[1750]: time="2025-01-30T13:26:38.399449084Z" level=info msg="CreateContainer within sandbox \"9d64a575f3b7bda0d92eeba15834006187796da49bb97b30f659cfea0a444108\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c4eb57daceb129d78b4136359f45ffd5d15c0a59c54f648dc832f5a63d7faee\"" Jan 30 13:26:38.399811 containerd[1750]: time="2025-01-30T13:26:38.399737084Z" level=info msg="StartContainer for \"dd90e71995c3ac0d6aae1e719fe9a7362b0ca762b16dff0f9e082a88944f74fd\"" Jan 30 13:26:38.406882 containerd[1750]: time="2025-01-30T13:26:38.405733160Z" level=info msg="StartContainer for \"7c4eb57daceb129d78b4136359f45ffd5d15c0a59c54f648dc832f5a63d7faee\"" Jan 30 13:26:38.419369 systemd[1]: Started cri-containerd-9275dccefa585fe195a61b918abf8d4df6b62b687fd3e3e997b4c5a38d7f87be.scope - libcontainer container 9275dccefa585fe195a61b918abf8d4df6b62b687fd3e3e997b4c5a38d7f87be. 
Jan 30 13:26:38.431326 kubelet[3019]: W0130 13:26:38.431178 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:38.431326 kubelet[3019]: E0130 13:26:38.431247 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jan 30 13:26:38.447110 systemd[1]: Started cri-containerd-dd90e71995c3ac0d6aae1e719fe9a7362b0ca762b16dff0f9e082a88944f74fd.scope - libcontainer container dd90e71995c3ac0d6aae1e719fe9a7362b0ca762b16dff0f9e082a88944f74fd. Jan 30 13:26:38.451779 systemd[1]: Started cri-containerd-7c4eb57daceb129d78b4136359f45ffd5d15c0a59c54f648dc832f5a63d7faee.scope - libcontainer container 7c4eb57daceb129d78b4136359f45ffd5d15c0a59c54f648dc832f5a63d7faee. 
Jan 30 13:26:38.486389 containerd[1750]: time="2025-01-30T13:26:38.486284189Z" level=info msg="StartContainer for \"9275dccefa585fe195a61b918abf8d4df6b62b687fd3e3e997b4c5a38d7f87be\" returns successfully" Jan 30 13:26:38.508965 containerd[1750]: time="2025-01-30T13:26:38.508814135Z" level=info msg="StartContainer for \"dd90e71995c3ac0d6aae1e719fe9a7362b0ca762b16dff0f9e082a88944f74fd\" returns successfully" Jan 30 13:26:38.518490 containerd[1750]: time="2025-01-30T13:26:38.518434248Z" level=info msg="StartContainer for \"7c4eb57daceb129d78b4136359f45ffd5d15c0a59c54f648dc832f5a63d7faee\" returns successfully" Jan 30 13:26:40.184140 kubelet[3019]: E0130 13:26:40.183875 3019 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-a-a27a4db638.181f7b56dd7ba1d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-a27a4db638,UID:ci-4186.1.0-a-a27a4db638,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-a27a4db638,},FirstTimestamp:2025-01-30 13:26:34.201571795 +0000 UTC m=+3.281287923,LastTimestamp:2025-01-30 13:26:34.201571795 +0000 UTC m=+3.281287923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-a27a4db638,}" Jan 30 13:26:40.275472 kubelet[3019]: E0130 13:26:40.275366 3019 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-a-a27a4db638.181f7b56de6379d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-a27a4db638,UID:ci-4186.1.0-a-a27a4db638,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-a27a4db638,},FirstTimestamp:2025-01-30 13:26:34.216765909 +0000 UTC m=+3.296482037,LastTimestamp:2025-01-30 13:26:34.216765909 +0000 UTC m=+3.296482037,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-a27a4db638,}" Jan 30 13:26:40.482270 kubelet[3019]: E0130 13:26:40.482145 3019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-a27a4db638\" not found" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:40.529319 kubelet[3019]: I0130 13:26:40.529277 3019 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:40.628044 kubelet[3019]: I0130 13:26:40.627907 3019 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:40.675343 kubelet[3019]: E0130 13:26:40.675301 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:40.776259 kubelet[3019]: E0130 13:26:40.776215 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:40.876790 kubelet[3019]: E0130 13:26:40.876750 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:40.977389 kubelet[3019]: E0130 13:26:40.977345 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.077564 kubelet[3019]: E0130 13:26:41.077444 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.177587 kubelet[3019]: E0130 13:26:41.177530 3019 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.278222 kubelet[3019]: E0130 13:26:41.278183 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.378689 kubelet[3019]: E0130 13:26:41.378578 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.479467 kubelet[3019]: E0130 13:26:41.479275 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.579848 kubelet[3019]: E0130 13:26:41.579806 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.680307 kubelet[3019]: E0130 13:26:41.680190 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.781082 kubelet[3019]: E0130 13:26:41.781041 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.881700 kubelet[3019]: E0130 13:26:41.881663 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:41.982357 kubelet[3019]: E0130 13:26:41.982250 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:42.082928 kubelet[3019]: E0130 13:26:42.082873 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-a27a4db638\" not found" Jan 30 13:26:42.203834 kubelet[3019]: I0130 13:26:42.203793 3019 apiserver.go:52] "Watching apiserver" Jan 30 13:26:42.215443 kubelet[3019]: I0130 13:26:42.215398 3019 desired_state_of_world_populator.go:157] "Finished populating initial desired 
state of world" Jan 30 13:26:43.339778 systemd[1]: Reloading requested from client PID 3295 ('systemctl') (unit session-9.scope)... Jan 30 13:26:43.340112 systemd[1]: Reloading... Jan 30 13:26:43.434015 zram_generator::config[3335]: No configuration found. Jan 30 13:26:43.540518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:26:43.633317 systemd[1]: Reloading finished in 292 ms. Jan 30 13:26:43.668518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:43.679350 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:26:43.679570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:26:43.679629 systemd[1]: kubelet.service: Consumed 1.305s CPU time, 113.3M memory peak, 0B memory swap peak. Jan 30 13:26:43.686168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:26:43.780842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:26:43.792661 (kubelet)[3399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:26:43.837606 kubelet[3399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:26:43.837606 kubelet[3399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:26:43.837606 kubelet[3399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:26:43.838842 kubelet[3399]: I0130 13:26:43.837644 3399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:26:43.842189 kubelet[3399]: I0130 13:26:43.842155 3399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:26:43.842189 kubelet[3399]: I0130 13:26:43.842180 3399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:26:43.843885 kubelet[3399]: I0130 13:26:43.842389 3399 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:26:43.844004 kubelet[3399]: I0130 13:26:43.843904 3399 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:26:43.845536 kubelet[3399]: I0130 13:26:43.845472 3399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:26:43.851948 kubelet[3399]: I0130 13:26:43.851837 3399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:26:43.852403 kubelet[3399]: I0130 13:26:43.852364 3399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:26:43.852944 kubelet[3399]: I0130 13:26:43.852472 3399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-a27a4db638","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:26:43.852944 kubelet[3399]: I0130 13:26:43.852664 3399 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 13:26:43.852944 kubelet[3399]: I0130 13:26:43.852673 3399 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:26:43.852944 kubelet[3399]: I0130 13:26:43.852707 3399 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:26:43.852944 kubelet[3399]: I0130 13:26:43.852825 3399 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:26:43.853149 kubelet[3399]: I0130 13:26:43.852840 3399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:26:43.853149 kubelet[3399]: I0130 13:26:43.852868 3399 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:26:43.853149 kubelet[3399]: I0130 13:26:43.852891 3399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:26:43.863530 kubelet[3399]: I0130 13:26:43.863495 3399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:26:43.863705 kubelet[3399]: I0130 13:26:43.863686 3399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:26:43.865011 kubelet[3399]: I0130 13:26:43.864987 3399 server.go:1264] "Started kubelet" Jan 30 13:26:43.865665 kubelet[3399]: I0130 13:26:43.865612 3399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:26:43.871525 kubelet[3399]: I0130 13:26:43.871498 3399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:26:43.874353 kubelet[3399]: I0130 13:26:43.871818 3399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:26:43.874619 kubelet[3399]: I0130 13:26:43.874579 3399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:26:43.876554 kubelet[3399]: I0130 13:26:43.876036 3399 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:26:43.885487 kubelet[3399]: I0130 13:26:43.885368 3399 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:26:43.892262 kubelet[3399]: I0130 13:26:43.891090 3399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:26:43.892262 kubelet[3399]: I0130 13:26:43.892205 3399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:26:43.892262 kubelet[3399]: I0130 13:26:43.892244 3399 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:26:43.892262 kubelet[3399]: I0130 13:26:43.892264 3399 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:26:43.892506 kubelet[3399]: E0130 13:26:43.892308 3399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:26:43.893455 kubelet[3399]: I0130 13:26:43.893439 3399 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:26:43.895520 kubelet[3399]: I0130 13:26:43.895013 3399 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:26:43.900802 kubelet[3399]: I0130 13:26:43.900767 3399 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:26:43.901092 kubelet[3399]: I0130 13:26:43.901070 3399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:26:43.905231 kubelet[3399]: E0130 13:26:43.905186 3399 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:26:43.905406 kubelet[3399]: I0130 13:26:43.905390 3399 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:26:43.957077 kubelet[3399]: I0130 13:26:43.957046 3399 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:26:43.957215 kubelet[3399]: I0130 13:26:43.957203 3399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:26:43.957364 kubelet[3399]: I0130 13:26:43.957301 3399 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:26:43.957637 kubelet[3399]: I0130 13:26:43.957531 3399 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:26:43.957637 kubelet[3399]: I0130 13:26:43.957545 3399 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:26:43.957637 kubelet[3399]: I0130 13:26:43.957564 3399 policy_none.go:49] "None policy: Start" Jan 30 13:26:43.958452 kubelet[3399]: I0130 13:26:43.958437 3399 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:26:43.958980 kubelet[3399]: I0130 13:26:43.958563 3399 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:26:43.958980 kubelet[3399]: I0130 13:26:43.958715 3399 state_mem.go:75] "Updated machine memory state" Jan 30 13:26:43.968613 kubelet[3399]: I0130 13:26:43.968579 3399 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:26:43.968815 kubelet[3399]: I0130 13:26:43.968765 3399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:26:43.968896 kubelet[3399]: I0130 13:26:43.968879 3399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:26:43.991365 kubelet[3399]: I0130 13:26:43.991329 3399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:43.994108 kubelet[3399]: I0130 
13:26:43.992882 3399 topology_manager.go:215] "Topology Admit Handler" podUID="29386032108190daf17029bd10554d49" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:43.994108 kubelet[3399]: I0130 13:26:43.993005 3399 topology_manager.go:215] "Topology Admit Handler" podUID="958bb0433e55311747aef795dba066be" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:43.994108 kubelet[3399]: I0130 13:26:43.993041 3399 topology_manager.go:215] "Topology Admit Handler" podUID="b8c60225bcc7304139b6d885b79f10f3" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.007650 kubelet[3399]: I0130 13:26:44.006755 3399 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.007650 kubelet[3399]: I0130 13:26:44.006851 3399 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.010900 kubelet[3399]: W0130 13:26:44.010843 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:26:44.013901 kubelet[3399]: W0130 13:26:44.013862 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:26:44.016134 kubelet[3399]: W0130 13:26:44.016100 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:26:44.099109 kubelet[3399]: I0130 13:26:44.099071 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: 
\"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099109 kubelet[3399]: I0130 13:26:44.099109 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099109 kubelet[3399]: I0130 13:26:44.099131 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099109 kubelet[3399]: I0130 13:26:44.099151 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099109 kubelet[3399]: I0130 13:26:44.099171 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099760 kubelet[3399]: I0130 13:26:44.099188 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29386032108190daf17029bd10554d49-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-a27a4db638\" (UID: \"29386032108190daf17029bd10554d49\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099760 kubelet[3399]: I0130 13:26:44.099207 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/958bb0433e55311747aef795dba066be-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-a27a4db638\" (UID: \"958bb0433e55311747aef795dba066be\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099760 kubelet[3399]: I0130 13:26:44.099230 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.099760 kubelet[3399]: I0130 13:26:44.099283 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8c60225bcc7304139b6d885b79f10f3-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" (UID: \"b8c60225bcc7304139b6d885b79f10f3\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.341774 sudo[3432]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:26:44.342081 sudo[3432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:26:44.807313 sudo[3432]: pam_unix(sudo:session): session closed for user root Jan 30 13:26:44.855056 kubelet[3399]: I0130 13:26:44.855014 3399 apiserver.go:52] "Watching apiserver" Jan 30 13:26:44.886253 
kubelet[3399]: I0130 13:26:44.886175 3399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:26:44.950168 kubelet[3399]: W0130 13:26:44.950092 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:26:44.950653 kubelet[3399]: E0130 13:26:44.950397 3399 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-a27a4db638\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" Jan 30 13:26:44.979017 kubelet[3399]: I0130 13:26:44.978473 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-a27a4db638" podStartSLOduration=0.978451479 podStartE2EDuration="978.451479ms" podCreationTimestamp="2025-01-30 13:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:44.967369484 +0000 UTC m=+1.171559755" watchObservedRunningTime="2025-01-30 13:26:44.978451479 +0000 UTC m=+1.182641750" Jan 30 13:26:44.998280 kubelet[3399]: I0130 13:26:44.997792 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-a27a4db638" podStartSLOduration=0.997774192 podStartE2EDuration="997.774192ms" podCreationTimestamp="2025-01-30 13:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:44.978876639 +0000 UTC m=+1.183066870" watchObservedRunningTime="2025-01-30 13:26:44.997774192 +0000 UTC m=+1.201964463" Jan 30 13:26:44.998630 kubelet[3399]: I0130 13:26:44.998558 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-a27a4db638" podStartSLOduration=0.998534152 
podStartE2EDuration="998.534152ms" podCreationTimestamp="2025-01-30 13:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:44.996472872 +0000 UTC m=+1.200663143" watchObservedRunningTime="2025-01-30 13:26:44.998534152 +0000 UTC m=+1.202724423" Jan 30 13:26:46.966663 sudo[2395]: pam_unix(sudo:session): session closed for user root Jan 30 13:26:47.040785 sshd[2394]: Connection closed by 10.200.16.10 port 39932 Jan 30 13:26:47.041352 sshd-session[2392]: pam_unix(sshd:session): session closed for user core Jan 30 13:26:47.044330 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:39932.service: Deactivated successfully. Jan 30 13:26:47.047125 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:26:47.047368 systemd[1]: session-9.scope: Consumed 7.411s CPU time, 190.3M memory peak, 0B memory swap peak. Jan 30 13:26:47.048084 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:26:47.049254 systemd-logind[1690]: Removed session 9. Jan 30 13:26:57.382481 kubelet[3399]: I0130 13:26:57.382422 3399 topology_manager.go:215] "Topology Admit Handler" podUID="7824041e-eb26-4724-b6a6-3833f16f5fb4" podNamespace="kube-system" podName="kube-proxy-qvrpp" Jan 30 13:26:57.392421 systemd[1]: Created slice kubepods-besteffort-pod7824041e_eb26_4724_b6a6_3833f16f5fb4.slice - libcontainer container kubepods-besteffort-pod7824041e_eb26_4724_b6a6_3833f16f5fb4.slice. 
Jan 30 13:26:57.396163 kubelet[3399]: W0130 13:26:57.396002 3399 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.1.0-a-a27a4db638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-a27a4db638' and this object Jan 30 13:26:57.396163 kubelet[3399]: E0130 13:26:57.396045 3399 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.1.0-a-a27a4db638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-a27a4db638' and this object Jan 30 13:26:57.396163 kubelet[3399]: W0130 13:26:57.396102 3399 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186.1.0-a-a27a4db638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-a27a4db638' and this object Jan 30 13:26:57.396163 kubelet[3399]: E0130 13:26:57.396113 3399 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186.1.0-a-a27a4db638" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-a27a4db638' and this object Jan 30 13:26:57.398054 kubelet[3399]: I0130 13:26:57.398013 3399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:26:57.398481 containerd[1750]: time="2025-01-30T13:26:57.398413545Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:26:57.402090 kubelet[3399]: I0130 13:26:57.401187 3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:26:57.422689 kubelet[3399]: I0130 13:26:57.422628 3399 topology_manager.go:215] "Topology Admit Handler" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" podNamespace="kube-system" podName="cilium-dw29w" Jan 30 13:26:57.434000 systemd[1]: Created slice kubepods-burstable-podd50ee0ad_ade5_4aeb_b30a_6847f68f108a.slice - libcontainer container kubepods-burstable-podd50ee0ad_ade5_4aeb_b30a_6847f68f108a.slice. Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482445 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hubble-tls\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482489 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26mz\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482512 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-run\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482528 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hostproc\") pod \"cilium-dw29w\" (UID: 
\"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482545 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-net\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.482953 kubelet[3399]: I0130 13:26:57.482560 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7824041e-eb26-4724-b6a6-3833f16f5fb4-lib-modules\") pod \"kube-proxy-qvrpp\" (UID: \"7824041e-eb26-4724-b6a6-3833f16f5fb4\") " pod="kube-system/kube-proxy-qvrpp" Jan 30 13:26:57.483223 kubelet[3399]: I0130 13:26:57.482576 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-clustermesh-secrets\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483223 kubelet[3399]: I0130 13:26:57.482592 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-config-path\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483223 kubelet[3399]: I0130 13:26:57.482609 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7824041e-eb26-4724-b6a6-3833f16f5fb4-xtables-lock\") pod \"kube-proxy-qvrpp\" (UID: \"7824041e-eb26-4724-b6a6-3833f16f5fb4\") " pod="kube-system/kube-proxy-qvrpp" Jan 30 13:26:57.483223 
kubelet[3399]: I0130 13:26:57.482625 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cni-path\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483223 kubelet[3399]: I0130 13:26:57.482640 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2sb7\" (UniqueName: \"kubernetes.io/projected/7824041e-eb26-4724-b6a6-3833f16f5fb4-kube-api-access-p2sb7\") pod \"kube-proxy-qvrpp\" (UID: \"7824041e-eb26-4724-b6a6-3833f16f5fb4\") " pod="kube-system/kube-proxy-qvrpp" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482655 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-lib-modules\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482678 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7824041e-eb26-4724-b6a6-3833f16f5fb4-kube-proxy\") pod \"kube-proxy-qvrpp\" (UID: \"7824041e-eb26-4724-b6a6-3833f16f5fb4\") " pod="kube-system/kube-proxy-qvrpp" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482698 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-cgroup\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482717 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-bpf-maps\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482733 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-etc-cni-netd\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483335 kubelet[3399]: I0130 13:26:57.482758 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-xtables-lock\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:57.483464 kubelet[3399]: I0130 13:26:57.482774 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-kernel\") pod \"cilium-dw29w\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " pod="kube-system/cilium-dw29w" Jan 30 13:26:58.436186 kubelet[3399]: I0130 13:26:58.434851 3399 topology_manager.go:215] "Topology Admit Handler" podUID="48105e32-c92e-40a6-b065-786ce3a90f69" podNamespace="kube-system" podName="cilium-operator-599987898-8vfgf" Jan 30 13:26:58.446207 systemd[1]: Created slice kubepods-besteffort-pod48105e32_c92e_40a6_b065_786ce3a90f69.slice - libcontainer container kubepods-besteffort-pod48105e32_c92e_40a6_b065_786ce3a90f69.slice. 
Jan 30 13:26:58.490128 kubelet[3399]: I0130 13:26:58.490079 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48105e32-c92e-40a6-b065-786ce3a90f69-cilium-config-path\") pod \"cilium-operator-599987898-8vfgf\" (UID: \"48105e32-c92e-40a6-b065-786ce3a90f69\") " pod="kube-system/cilium-operator-599987898-8vfgf" Jan 30 13:26:58.490128 kubelet[3399]: I0130 13:26:58.490126 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv2n7\" (UniqueName: \"kubernetes.io/projected/48105e32-c92e-40a6-b065-786ce3a90f69-kube-api-access-xv2n7\") pod \"cilium-operator-599987898-8vfgf\" (UID: \"48105e32-c92e-40a6-b065-786ce3a90f69\") " pod="kube-system/cilium-operator-599987898-8vfgf" Jan 30 13:26:58.602813 kubelet[3399]: E0130 13:26:58.602767 3399 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:58.603010 kubelet[3399]: E0130 13:26:58.602839 3399 projected.go:200] Error preparing data for projected volume kube-api-access-k26mz for pod kube-system/cilium-dw29w: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:58.603010 kubelet[3399]: E0130 13:26:58.602933 3399 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz podName:d50ee0ad-ade5-4aeb-b30a-6847f68f108a nodeName:}" failed. No retries permitted until 2025-01-30 13:26:59.102892012 +0000 UTC m=+15.307082283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k26mz" (UniqueName: "kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz") pod "cilium-dw29w" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:58.609109 kubelet[3399]: E0130 13:26:58.608797 3399 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:58.609109 kubelet[3399]: E0130 13:26:58.608835 3399 projected.go:200] Error preparing data for projected volume kube-api-access-p2sb7 for pod kube-system/kube-proxy-qvrpp: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:58.609109 kubelet[3399]: E0130 13:26:58.608888 3399 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7824041e-eb26-4724-b6a6-3833f16f5fb4-kube-api-access-p2sb7 podName:7824041e-eb26-4724-b6a6-3833f16f5fb4 nodeName:}" failed. No retries permitted until 2025-01-30 13:26:59.10887077 +0000 UTC m=+15.313061041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p2sb7" (UniqueName: "kubernetes.io/projected/7824041e-eb26-4724-b6a6-3833f16f5fb4-kube-api-access-p2sb7") pod "kube-proxy-qvrpp" (UID: "7824041e-eb26-4724-b6a6-3833f16f5fb4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:26:59.052473 containerd[1750]: time="2025-01-30T13:26:59.052348028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8vfgf,Uid:48105e32-c92e-40a6-b065-786ce3a90f69,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:59.099860 containerd[1750]: time="2025-01-30T13:26:59.097302290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:59.099860 containerd[1750]: time="2025-01-30T13:26:59.097358410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:59.099860 containerd[1750]: time="2025-01-30T13:26:59.097799489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.099860 containerd[1750]: time="2025-01-30T13:26:59.098031729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.119112 systemd[1]: Started cri-containerd-dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c.scope - libcontainer container dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c. Jan 30 13:26:59.146150 containerd[1750]: time="2025-01-30T13:26:59.146107270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8vfgf,Uid:48105e32-c92e-40a6-b065-786ce3a90f69,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\"" Jan 30 13:26:59.148546 containerd[1750]: time="2025-01-30T13:26:59.148501749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:26:59.203332 containerd[1750]: time="2025-01-30T13:26:59.202893966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvrpp,Uid:7824041e-eb26-4724-b6a6-3833f16f5fb4,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:59.239658 containerd[1750]: time="2025-01-30T13:26:59.239622671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dw29w,Uid:d50ee0ad-ade5-4aeb-b30a-6847f68f108a,Namespace:kube-system,Attempt:0,}" Jan 30 13:26:59.253149 containerd[1750]: time="2025-01-30T13:26:59.252885986Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:59.253149 containerd[1750]: time="2025-01-30T13:26:59.252958306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:59.253149 containerd[1750]: time="2025-01-30T13:26:59.252984146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.253149 containerd[1750]: time="2025-01-30T13:26:59.253057106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.269187 systemd[1]: Started cri-containerd-ada61cf60d13381fee1ccbacbf2900d6ea39b757520672f718caa7c92c3a2b55.scope - libcontainer container ada61cf60d13381fee1ccbacbf2900d6ea39b757520672f718caa7c92c3a2b55. Jan 30 13:26:59.292398 containerd[1750]: time="2025-01-30T13:26:59.292177770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:26:59.292863 containerd[1750]: time="2025-01-30T13:26:59.292831690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:26:59.292997 containerd[1750]: time="2025-01-30T13:26:59.292974650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.293478 containerd[1750]: time="2025-01-30T13:26:59.293418609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:26:59.294988 containerd[1750]: time="2025-01-30T13:26:59.294955649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvrpp,Uid:7824041e-eb26-4724-b6a6-3833f16f5fb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada61cf60d13381fee1ccbacbf2900d6ea39b757520672f718caa7c92c3a2b55\"" Jan 30 13:26:59.301013 containerd[1750]: time="2025-01-30T13:26:59.300953886Z" level=info msg="CreateContainer within sandbox \"ada61cf60d13381fee1ccbacbf2900d6ea39b757520672f718caa7c92c3a2b55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:26:59.317124 systemd[1]: Started cri-containerd-649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564.scope - libcontainer container 649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564. Jan 30 13:26:59.343879 containerd[1750]: time="2025-01-30T13:26:59.343833549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dw29w,Uid:d50ee0ad-ade5-4aeb-b30a-6847f68f108a,Namespace:kube-system,Attempt:0,} returns sandbox id \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\"" Jan 30 13:26:59.350673 containerd[1750]: time="2025-01-30T13:26:59.350624306Z" level=info msg="CreateContainer within sandbox \"ada61cf60d13381fee1ccbacbf2900d6ea39b757520672f718caa7c92c3a2b55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7442c58b9597c8b9cf3b73c081057c4db030c8c3cfd7f814cca09fdd10905e9\"" Jan 30 13:26:59.351311 containerd[1750]: time="2025-01-30T13:26:59.351282346Z" level=info msg="StartContainer for \"e7442c58b9597c8b9cf3b73c081057c4db030c8c3cfd7f814cca09fdd10905e9\"" Jan 30 13:26:59.375119 systemd[1]: Started cri-containerd-e7442c58b9597c8b9cf3b73c081057c4db030c8c3cfd7f814cca09fdd10905e9.scope - libcontainer container e7442c58b9597c8b9cf3b73c081057c4db030c8c3cfd7f814cca09fdd10905e9. 
Jan 30 13:26:59.404662 containerd[1750]: time="2025-01-30T13:26:59.404597244Z" level=info msg="StartContainer for \"e7442c58b9597c8b9cf3b73c081057c4db030c8c3cfd7f814cca09fdd10905e9\" returns successfully" Jan 30 13:27:00.476311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633435780.mount: Deactivated successfully. Jan 30 13:27:01.157244 containerd[1750]: time="2025-01-30T13:27:01.157187686Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:27:01.162020 containerd[1750]: time="2025-01-30T13:27:01.161956125Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:27:01.164719 containerd[1750]: time="2025-01-30T13:27:01.164631404Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:27:01.166874 containerd[1750]: time="2025-01-30T13:27:01.166834644Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.018295575s" Jan 30 13:27:01.166943 containerd[1750]: time="2025-01-30T13:27:01.166879804Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:27:01.168197 containerd[1750]: 
time="2025-01-30T13:27:01.167955523Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:27:01.169717 containerd[1750]: time="2025-01-30T13:27:01.169683283Z" level=info msg="CreateContainer within sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:27:01.221497 containerd[1750]: time="2025-01-30T13:27:01.221419669Z" level=info msg="CreateContainer within sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\"" Jan 30 13:27:01.222323 containerd[1750]: time="2025-01-30T13:27:01.222197229Z" level=info msg="StartContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\"" Jan 30 13:27:01.252130 systemd[1]: Started cri-containerd-4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4.scope - libcontainer container 4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4. 
Jan 30 13:27:01.282952 containerd[1750]: time="2025-01-30T13:27:01.282370652Z" level=info msg="StartContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" returns successfully" Jan 30 13:27:01.992057 kubelet[3399]: I0130 13:27:01.991978 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvrpp" podStartSLOduration=4.9919589 podStartE2EDuration="4.9919589s" podCreationTimestamp="2025-01-30 13:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:59.974615167 +0000 UTC m=+16.178805438" watchObservedRunningTime="2025-01-30 13:27:01.9919589 +0000 UTC m=+18.196149171" Jan 30 13:27:01.992801 kubelet[3399]: I0130 13:27:01.992462 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8vfgf" podStartSLOduration=1.972260965 podStartE2EDuration="3.992451619s" podCreationTimestamp="2025-01-30 13:26:58 +0000 UTC" firstStartedPulling="2025-01-30 13:26:59.147528349 +0000 UTC m=+15.351718580" lastFinishedPulling="2025-01-30 13:27:01.167718963 +0000 UTC m=+17.371909234" observedRunningTime="2025-01-30 13:27:01.99098482 +0000 UTC m=+18.195175091" watchObservedRunningTime="2025-01-30 13:27:01.992451619 +0000 UTC m=+18.196641850" Jan 30 13:27:05.637570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381750370.mount: Deactivated successfully. 
Jan 30 13:27:08.200757 containerd[1750]: time="2025-01-30T13:27:08.200689184Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:27:08.203110 containerd[1750]: time="2025-01-30T13:27:08.203042864Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:27:08.208047 containerd[1750]: time="2025-01-30T13:27:08.207975022Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:27:08.210119 containerd[1750]: time="2025-01-30T13:27:08.209972982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.041975699s" Jan 30 13:27:08.210119 containerd[1750]: time="2025-01-30T13:27:08.210016902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:27:08.213760 containerd[1750]: time="2025-01-30T13:27:08.213682861Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:27:08.260719 containerd[1750]: time="2025-01-30T13:27:08.260642409Z" level=info msg="CreateContainer within sandbox 
\"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\"" Jan 30 13:27:08.262249 containerd[1750]: time="2025-01-30T13:27:08.261373409Z" level=info msg="StartContainer for \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\"" Jan 30 13:27:08.294145 systemd[1]: Started cri-containerd-0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da.scope - libcontainer container 0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da. Jan 30 13:27:08.321828 containerd[1750]: time="2025-01-30T13:27:08.321777194Z" level=info msg="StartContainer for \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\" returns successfully" Jan 30 13:27:08.329781 systemd[1]: cri-containerd-0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da.scope: Deactivated successfully. Jan 30 13:27:09.234595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da-rootfs.mount: Deactivated successfully. 
Jan 30 13:27:09.429879 containerd[1750]: time="2025-01-30T13:27:09.429816156Z" level=info msg="shim disconnected" id=0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da namespace=k8s.io Jan 30 13:27:09.429879 containerd[1750]: time="2025-01-30T13:27:09.429875196Z" level=warning msg="cleaning up after shim disconnected" id=0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da namespace=k8s.io Jan 30 13:27:09.429879 containerd[1750]: time="2025-01-30T13:27:09.429883276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:09.995672 containerd[1750]: time="2025-01-30T13:27:09.995539255Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:27:10.028838 containerd[1750]: time="2025-01-30T13:27:10.028784806Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\"" Jan 30 13:27:10.029412 containerd[1750]: time="2025-01-30T13:27:10.029389046Z" level=info msg="StartContainer for \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\"" Jan 30 13:27:10.059210 systemd[1]: Started cri-containerd-d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7.scope - libcontainer container d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7. Jan 30 13:27:10.090607 containerd[1750]: time="2025-01-30T13:27:10.090544391Z" level=info msg="StartContainer for \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\" returns successfully" Jan 30 13:27:10.100230 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:27:10.100438 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:27:10.100504 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:27:10.107308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:27:10.107507 systemd[1]: cri-containerd-d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7.scope: Deactivated successfully. Jan 30 13:27:10.125349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:27:10.145944 containerd[1750]: time="2025-01-30T13:27:10.145671937Z" level=info msg="shim disconnected" id=d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7 namespace=k8s.io Jan 30 13:27:10.145944 containerd[1750]: time="2025-01-30T13:27:10.145735897Z" level=warning msg="cleaning up after shim disconnected" id=d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7 namespace=k8s.io Jan 30 13:27:10.145944 containerd[1750]: time="2025-01-30T13:27:10.145744937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:10.234458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7-rootfs.mount: Deactivated successfully. Jan 30 13:27:10.997975 containerd[1750]: time="2025-01-30T13:27:10.997759804Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:27:11.039732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53601222.mount: Deactivated successfully. 
Jan 30 13:27:11.053273 containerd[1750]: time="2025-01-30T13:27:11.053227990Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\"" Jan 30 13:27:11.055202 containerd[1750]: time="2025-01-30T13:27:11.054092710Z" level=info msg="StartContainer for \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\"" Jan 30 13:27:11.084131 systemd[1]: Started cri-containerd-94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b.scope - libcontainer container 94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b. Jan 30 13:27:11.112555 systemd[1]: cri-containerd-94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b.scope: Deactivated successfully. Jan 30 13:27:11.118458 containerd[1750]: time="2025-01-30T13:27:11.118345653Z" level=info msg="StartContainer for \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\" returns successfully" Jan 30 13:27:11.165578 containerd[1750]: time="2025-01-30T13:27:11.165508642Z" level=info msg="shim disconnected" id=94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b namespace=k8s.io Jan 30 13:27:11.165578 containerd[1750]: time="2025-01-30T13:27:11.165572562Z" level=warning msg="cleaning up after shim disconnected" id=94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b namespace=k8s.io Jan 30 13:27:11.165578 containerd[1750]: time="2025-01-30T13:27:11.165582082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:11.234438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b-rootfs.mount: Deactivated successfully. 
Jan 30 13:27:12.003470 containerd[1750]: time="2025-01-30T13:27:12.003415032Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:27:12.051576 containerd[1750]: time="2025-01-30T13:27:12.051510060Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\"" Jan 30 13:27:12.054221 containerd[1750]: time="2025-01-30T13:27:12.054168659Z" level=info msg="StartContainer for \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\"" Jan 30 13:27:12.095152 systemd[1]: Started cri-containerd-c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a.scope - libcontainer container c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a. Jan 30 13:27:12.119488 systemd[1]: cri-containerd-c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a.scope: Deactivated successfully. 
Jan 30 13:27:12.123702 containerd[1750]: time="2025-01-30T13:27:12.123589922Z" level=info msg="StartContainer for \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\" returns successfully" Jan 30 13:27:12.152854 containerd[1750]: time="2025-01-30T13:27:12.152746154Z" level=info msg="shim disconnected" id=c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a namespace=k8s.io Jan 30 13:27:12.152854 containerd[1750]: time="2025-01-30T13:27:12.152801834Z" level=warning msg="cleaning up after shim disconnected" id=c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a namespace=k8s.io Jan 30 13:27:12.152854 containerd[1750]: time="2025-01-30T13:27:12.152808994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:27:12.234403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a-rootfs.mount: Deactivated successfully. Jan 30 13:27:13.007747 containerd[1750]: time="2025-01-30T13:27:13.007467540Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:27:13.050263 containerd[1750]: time="2025-01-30T13:27:13.050212930Z" level=info msg="CreateContainer within sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\"" Jan 30 13:27:13.051306 containerd[1750]: time="2025-01-30T13:27:13.051155609Z" level=info msg="StartContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\"" Jan 30 13:27:13.084144 systemd[1]: Started cri-containerd-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59.scope - libcontainer container b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59. 
Jan 30 13:27:13.114867 containerd[1750]: time="2025-01-30T13:27:13.114818033Z" level=info msg="StartContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" returns successfully" Jan 30 13:27:13.235734 systemd[1]: run-containerd-runc-k8s.io-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59-runc.V1Pnhl.mount: Deactivated successfully. Jan 30 13:27:13.241001 kubelet[3399]: I0130 13:27:13.240715 3399 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:27:13.289774 kubelet[3399]: I0130 13:27:13.289700 3399 topology_manager.go:215] "Topology Admit Handler" podUID="73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jdpj4" Jan 30 13:27:13.296728 kubelet[3399]: I0130 13:27:13.295524 3399 topology_manager.go:215] "Topology Admit Handler" podUID="ad815d85-f6d7-4e86-bdf5-efa1019b0076" podNamespace="kube-system" podName="coredns-7db6d8ff4d-25xwn" Jan 30 13:27:13.306759 systemd[1]: Created slice kubepods-burstable-pod73ae20f1_52c6_43fc_9ea6_0c5cf8ab611d.slice - libcontainer container kubepods-burstable-pod73ae20f1_52c6_43fc_9ea6_0c5cf8ab611d.slice. Jan 30 13:27:13.312344 systemd[1]: Created slice kubepods-burstable-podad815d85_f6d7_4e86_bdf5_efa1019b0076.slice - libcontainer container kubepods-burstable-podad815d85_f6d7_4e86_bdf5_efa1019b0076.slice. 
Jan 30 13:27:13.393973 kubelet[3399]: I0130 13:27:13.393876 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d-config-volume\") pod \"coredns-7db6d8ff4d-jdpj4\" (UID: \"73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d\") " pod="kube-system/coredns-7db6d8ff4d-jdpj4"
Jan 30 13:27:13.394783 kubelet[3399]: I0130 13:27:13.394501 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4f2\" (UniqueName: \"kubernetes.io/projected/73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d-kube-api-access-4m4f2\") pod \"coredns-7db6d8ff4d-jdpj4\" (UID: \"73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d\") " pod="kube-system/coredns-7db6d8ff4d-jdpj4"
Jan 30 13:27:13.394783 kubelet[3399]: I0130 13:27:13.394529 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8lm\" (UniqueName: \"kubernetes.io/projected/ad815d85-f6d7-4e86-bdf5-efa1019b0076-kube-api-access-6t8lm\") pod \"coredns-7db6d8ff4d-25xwn\" (UID: \"ad815d85-f6d7-4e86-bdf5-efa1019b0076\") " pod="kube-system/coredns-7db6d8ff4d-25xwn"
Jan 30 13:27:13.394783 kubelet[3399]: I0130 13:27:13.394695 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad815d85-f6d7-4e86-bdf5-efa1019b0076-config-volume\") pod \"coredns-7db6d8ff4d-25xwn\" (UID: \"ad815d85-f6d7-4e86-bdf5-efa1019b0076\") " pod="kube-system/coredns-7db6d8ff4d-25xwn"
Jan 30 13:27:13.621346 containerd[1750]: time="2025-01-30T13:27:13.620810787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdpj4,Uid:73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:27:13.624107 containerd[1750]: time="2025-01-30T13:27:13.623524466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25xwn,Uid:ad815d85-f6d7-4e86-bdf5-efa1019b0076,Namespace:kube-system,Attempt:0,}"
Jan 30 13:27:14.027760 kubelet[3399]: I0130 13:27:14.027645 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dw29w" podStartSLOduration=8.161357291 podStartE2EDuration="17.027627565s" podCreationTimestamp="2025-01-30 13:26:57 +0000 UTC" firstStartedPulling="2025-01-30 13:26:59.345081908 +0000 UTC m=+15.549272179" lastFinishedPulling="2025-01-30 13:27:08.211352182 +0000 UTC m=+24.415542453" observedRunningTime="2025-01-30 13:27:14.026954845 +0000 UTC m=+30.231145196" watchObservedRunningTime="2025-01-30 13:27:14.027627565 +0000 UTC m=+30.231817836"
Jan 30 13:27:15.148827 systemd-networkd[1537]: cilium_host: Link UP
Jan 30 13:27:15.149489 systemd-networkd[1537]: cilium_net: Link UP
Jan 30 13:27:15.149620 systemd-networkd[1537]: cilium_net: Gained carrier
Jan 30 13:27:15.151057 systemd-networkd[1537]: cilium_host: Gained carrier
Jan 30 13:27:15.243178 systemd-networkd[1537]: cilium_vxlan: Link UP
Jan 30 13:27:15.243185 systemd-networkd[1537]: cilium_vxlan: Gained carrier
Jan 30 13:27:15.426490 systemd-networkd[1537]: cilium_net: Gained IPv6LL
Jan 30 13:27:15.468007 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:27:15.660024 systemd-networkd[1537]: cilium_host: Gained IPv6LL
Jan 30 13:27:16.092006 systemd-networkd[1537]: lxc_health: Link UP
Jan 30 13:27:16.120131 systemd-networkd[1537]: lxc_health: Gained carrier
Jan 30 13:27:16.222047 systemd-networkd[1537]: lxc85bbd2b13e5c: Link UP
Jan 30 13:27:16.229938 kernel: eth0: renamed from tmp5a63e
Jan 30 13:27:16.237074 systemd-networkd[1537]: lxc85bbd2b13e5c: Gained carrier
Jan 30 13:27:16.703376 systemd-networkd[1537]: lxcb84f6487727f: Link UP
Jan 30 13:27:16.716970 kernel: eth0: renamed from tmp86a76
Jan 30 13:27:16.724482 systemd-networkd[1537]: lxcb84f6487727f: Gained carrier
Jan 30 13:27:17.002097 systemd-networkd[1537]: cilium_vxlan: Gained IPv6LL
Jan 30 13:27:17.451113 systemd-networkd[1537]: lxc_health: Gained IPv6LL
Jan 30 13:27:17.899016 systemd-networkd[1537]: lxcb84f6487727f: Gained IPv6LL
Jan 30 13:27:18.154023 systemd-networkd[1537]: lxc85bbd2b13e5c: Gained IPv6LL
Jan 30 13:27:19.834715 kubelet[3399]: I0130 13:27:19.834668 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:27:19.878535 containerd[1750]: time="2025-01-30T13:27:19.878398820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:27:19.879356 containerd[1750]: time="2025-01-30T13:27:19.878636700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:27:19.879356 containerd[1750]: time="2025-01-30T13:27:19.878785100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:27:19.884147 containerd[1750]: time="2025-01-30T13:27:19.880697540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:27:19.914317 systemd[1]: Started cri-containerd-5a63eb0ab6d4d90fd5e76c5e18e62a6fbd315d44e769000653c513f6a701c647.scope - libcontainer container 5a63eb0ab6d4d90fd5e76c5e18e62a6fbd315d44e769000653c513f6a701c647.
Jan 30 13:27:19.930110 containerd[1750]: time="2025-01-30T13:27:19.925974694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:27:19.930110 containerd[1750]: time="2025-01-30T13:27:19.926048054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:27:19.930110 containerd[1750]: time="2025-01-30T13:27:19.926058974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:27:19.930110 containerd[1750]: time="2025-01-30T13:27:19.926147734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:27:19.964110 systemd[1]: Started cri-containerd-86a765c8acf84f952c95763689c1251f183d2a13c5d4174f4a31e9b6c13afb70.scope - libcontainer container 86a765c8acf84f952c95763689c1251f183d2a13c5d4174f4a31e9b6c13afb70.
Jan 30 13:27:19.971953 containerd[1750]: time="2025-01-30T13:27:19.971841928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25xwn,Uid:ad815d85-f6d7-4e86-bdf5-efa1019b0076,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a63eb0ab6d4d90fd5e76c5e18e62a6fbd315d44e769000653c513f6a701c647\""
Jan 30 13:27:19.978099 containerd[1750]: time="2025-01-30T13:27:19.977815767Z" level=info msg="CreateContainer within sandbox \"5a63eb0ab6d4d90fd5e76c5e18e62a6fbd315d44e769000653c513f6a701c647\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:27:20.015467 containerd[1750]: time="2025-01-30T13:27:20.015432003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdpj4,Uid:73ae20f1-52c6-43fc-9ea6-0c5cf8ab611d,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a765c8acf84f952c95763689c1251f183d2a13c5d4174f4a31e9b6c13afb70\""
Jan 30 13:27:20.017091 containerd[1750]: time="2025-01-30T13:27:20.017012003Z" level=info msg="CreateContainer within sandbox \"5a63eb0ab6d4d90fd5e76c5e18e62a6fbd315d44e769000653c513f6a701c647\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"091bb5cbecd613011322ff1944989f8c8777e4fac033eddc0579cc723252d009\""
Jan 30 13:27:20.020161 containerd[1750]: time="2025-01-30T13:27:20.018272962Z" level=info msg="StartContainer for \"091bb5cbecd613011322ff1944989f8c8777e4fac033eddc0579cc723252d009\""
Jan 30 13:27:20.022612 containerd[1750]: time="2025-01-30T13:27:20.022576282Z" level=info msg="CreateContainer within sandbox \"86a765c8acf84f952c95763689c1251f183d2a13c5d4174f4a31e9b6c13afb70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:27:20.056184 systemd[1]: Started cri-containerd-091bb5cbecd613011322ff1944989f8c8777e4fac033eddc0579cc723252d009.scope - libcontainer container 091bb5cbecd613011322ff1944989f8c8777e4fac033eddc0579cc723252d009.
Jan 30 13:27:20.083142 containerd[1750]: time="2025-01-30T13:27:20.082900514Z" level=info msg="CreateContainer within sandbox \"86a765c8acf84f952c95763689c1251f183d2a13c5d4174f4a31e9b6c13afb70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7eb14915160018edae6fbed9abf2f4ecf96e3acd9507d751ab4dcb43a9b08af\""
Jan 30 13:27:20.087015 containerd[1750]: time="2025-01-30T13:27:20.085259874Z" level=info msg="StartContainer for \"e7eb14915160018edae6fbed9abf2f4ecf96e3acd9507d751ab4dcb43a9b08af\""
Jan 30 13:27:20.104163 containerd[1750]: time="2025-01-30T13:27:20.104116352Z" level=info msg="StartContainer for \"091bb5cbecd613011322ff1944989f8c8777e4fac033eddc0579cc723252d009\" returns successfully"
Jan 30 13:27:20.125418 systemd[1]: Started cri-containerd-e7eb14915160018edae6fbed9abf2f4ecf96e3acd9507d751ab4dcb43a9b08af.scope - libcontainer container e7eb14915160018edae6fbed9abf2f4ecf96e3acd9507d751ab4dcb43a9b08af.
Jan 30 13:27:20.180950 containerd[1750]: time="2025-01-30T13:27:20.180879822Z" level=info msg="StartContainer for \"e7eb14915160018edae6fbed9abf2f4ecf96e3acd9507d751ab4dcb43a9b08af\" returns successfully"
Jan 30 13:27:21.090432 kubelet[3399]: I0130 13:27:21.089906 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-25xwn" podStartSLOduration=23.089887668 podStartE2EDuration="23.089887668s" podCreationTimestamp="2025-01-30 13:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:27:21.068713351 +0000 UTC m=+37.272903622" watchObservedRunningTime="2025-01-30 13:27:21.089887668 +0000 UTC m=+37.294077939"
Jan 30 13:27:21.090432 kubelet[3399]: I0130 13:27:21.090311 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jdpj4" podStartSLOduration=23.090304428 podStartE2EDuration="23.090304428s" podCreationTimestamp="2025-01-30 13:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:27:21.090047228 +0000 UTC m=+37.294237499" watchObservedRunningTime="2025-01-30 13:27:21.090304428 +0000 UTC m=+37.294494659"
Jan 30 13:28:37.474222 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:39744.service - OpenSSH per-connection server daemon (10.200.16.10:39744).
Jan 30 13:28:37.902780 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 39744 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:37.904240 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:37.909004 systemd-logind[1690]: New session 10 of user core.
Jan 30 13:28:37.918148 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:28:38.312966 sshd[4780]: Connection closed by 10.200.16.10 port 39744
Jan 30 13:28:38.313526 sshd-session[4778]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:38.316424 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:39744.service: Deactivated successfully.
Jan 30 13:28:38.318369 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:28:38.320258 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:28:38.321608 systemd-logind[1690]: Removed session 10.
Jan 30 13:28:43.395225 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:39750.service - OpenSSH per-connection server daemon (10.200.16.10:39750).
Jan 30 13:28:43.827591 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 39750 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:43.829157 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:43.833344 systemd-logind[1690]: New session 11 of user core.
Jan 30 13:28:43.839092 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:28:44.222480 sshd[4794]: Connection closed by 10.200.16.10 port 39750
Jan 30 13:28:44.222294 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:44.226443 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:39750.service: Deactivated successfully.
Jan 30 13:28:44.228440 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:28:44.229419 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:28:44.230792 systemd-logind[1690]: Removed session 11.
Jan 30 13:28:49.308246 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:33578.service - OpenSSH per-connection server daemon (10.200.16.10:33578).
Jan 30 13:28:49.721852 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 33578 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:49.724134 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:49.729249 systemd-logind[1690]: New session 12 of user core.
Jan 30 13:28:49.735146 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:28:50.105425 sshd[4810]: Connection closed by 10.200.16.10 port 33578
Jan 30 13:28:50.106107 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:50.110827 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:33578.service: Deactivated successfully.
Jan 30 13:28:50.113703 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:28:50.115614 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:28:50.117089 systemd-logind[1690]: Removed session 12.
Jan 30 13:28:55.190741 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:33586.service - OpenSSH per-connection server daemon (10.200.16.10:33586).
Jan 30 13:28:55.631780 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 33586 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:55.633093 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:55.637152 systemd-logind[1690]: New session 13 of user core.
Jan 30 13:28:55.643072 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:28:56.011904 sshd[4824]: Connection closed by 10.200.16.10 port 33586
Jan 30 13:28:56.011529 sshd-session[4822]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:56.015454 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:33586.service: Deactivated successfully.
Jan 30 13:28:56.017763 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:28:56.019344 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:28:56.020475 systemd-logind[1690]: Removed session 13.
Jan 30 13:28:56.093234 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:57774.service - OpenSSH per-connection server daemon (10.200.16.10:57774).
Jan 30 13:28:56.520632 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 57774 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:56.521868 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:56.526355 systemd-logind[1690]: New session 14 of user core.
Jan 30 13:28:56.539186 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:28:56.932278 sshd[4839]: Connection closed by 10.200.16.10 port 57774
Jan 30 13:28:56.933391 sshd-session[4837]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:56.936063 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:57774.service: Deactivated successfully.
Jan 30 13:28:56.938214 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:28:56.940379 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:28:56.941771 systemd-logind[1690]: Removed session 14.
Jan 30 13:28:57.013817 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:57776.service - OpenSSH per-connection server daemon (10.200.16.10:57776).
Jan 30 13:28:57.451867 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 57776 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:28:57.453220 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:28:57.457158 systemd-logind[1690]: New session 15 of user core.
Jan 30 13:28:57.464069 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:28:57.849178 sshd[4850]: Connection closed by 10.200.16.10 port 57776
Jan 30 13:28:57.849872 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Jan 30 13:28:57.853360 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:57776.service: Deactivated successfully.
Jan 30 13:28:57.855176 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:28:57.855971 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:28:57.857239 systemd-logind[1690]: Removed session 15.
Jan 30 13:29:02.926525 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:57778.service - OpenSSH per-connection server daemon (10.200.16.10:57778).
Jan 30 13:29:03.360408 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 57778 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:03.361693 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:03.365757 systemd-logind[1690]: New session 16 of user core.
Jan 30 13:29:03.371087 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:29:03.753081 sshd[4864]: Connection closed by 10.200.16.10 port 57778
Jan 30 13:29:03.753595 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:03.757443 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:57778.service: Deactivated successfully.
Jan 30 13:29:03.759537 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:29:03.760386 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:29:03.761672 systemd-logind[1690]: Removed session 16.
Jan 30 13:29:03.836177 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:57780.service - OpenSSH per-connection server daemon (10.200.16.10:57780).
Jan 30 13:29:04.267247 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 57780 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:04.268543 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:04.273973 systemd-logind[1690]: New session 17 of user core.
Jan 30 13:29:04.279096 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:29:04.690028 sshd[4877]: Connection closed by 10.200.16.10 port 57780
Jan 30 13:29:04.689381 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:04.692103 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:57780.service: Deactivated successfully.
Jan 30 13:29:04.694192 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:29:04.696538 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:29:04.698170 systemd-logind[1690]: Removed session 17.
Jan 30 13:29:04.768677 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:57784.service - OpenSSH per-connection server daemon (10.200.16.10:57784).
Jan 30 13:29:05.209361 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 57784 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:05.210791 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:05.214794 systemd-logind[1690]: New session 18 of user core.
Jan 30 13:29:05.223080 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:29:06.988874 sshd[4888]: Connection closed by 10.200.16.10 port 57784
Jan 30 13:29:06.989991 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:06.992751 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:57784.service: Deactivated successfully.
Jan 30 13:29:06.994675 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:29:06.996584 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:29:06.998517 systemd-logind[1690]: Removed session 18.
Jan 30 13:29:07.064843 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:42722.service - OpenSSH per-connection server daemon (10.200.16.10:42722).
Jan 30 13:29:07.496754 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 42722 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:07.498183 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:07.502590 systemd-logind[1690]: New session 19 of user core.
Jan 30 13:29:07.511089 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:29:07.989228 sshd[4906]: Connection closed by 10.200.16.10 port 42722
Jan 30 13:29:07.989858 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:07.993543 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:42722.service: Deactivated successfully.
Jan 30 13:29:07.995569 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:29:07.996498 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:29:07.997501 systemd-logind[1690]: Removed session 19.
Jan 30 13:29:08.068149 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:42724.service - OpenSSH per-connection server daemon (10.200.16.10:42724).
Jan 30 13:29:08.502551 sshd[4915]: Accepted publickey for core from 10.200.16.10 port 42724 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:08.503947 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:08.508990 systemd-logind[1690]: New session 20 of user core.
Jan 30 13:29:08.515099 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:29:08.896987 sshd[4917]: Connection closed by 10.200.16.10 port 42724
Jan 30 13:29:08.896365 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:08.900003 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:42724.service: Deactivated successfully.
Jan 30 13:29:08.901715 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:29:08.902452 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:29:08.903809 systemd-logind[1690]: Removed session 20.
Jan 30 13:29:13.978766 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:42736.service - OpenSSH per-connection server daemon (10.200.16.10:42736).
Jan 30 13:29:14.414662 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 42736 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:14.416043 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:14.420064 systemd-logind[1690]: New session 21 of user core.
Jan 30 13:29:14.426103 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:29:14.791951 sshd[4933]: Connection closed by 10.200.16.10 port 42736
Jan 30 13:29:14.792536 sshd-session[4931]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:14.796112 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:42736.service: Deactivated successfully.
Jan 30 13:29:14.798856 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:29:14.800074 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:29:14.801118 systemd-logind[1690]: Removed session 21.
Jan 30 13:29:19.873250 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:47722.service - OpenSSH per-connection server daemon (10.200.16.10:47722).
Jan 30 13:29:20.305422 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 47722 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:20.306734 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:20.311716 systemd-logind[1690]: New session 22 of user core.
Jan 30 13:29:20.320140 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:29:20.689469 sshd[4946]: Connection closed by 10.200.16.10 port 47722
Jan 30 13:29:20.690237 sshd-session[4944]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:20.694487 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:47722.service: Deactivated successfully.
Jan 30 13:29:20.697530 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:29:20.698393 systemd-logind[1690]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:29:20.699661 systemd-logind[1690]: Removed session 22.
Jan 30 13:29:25.766083 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:47728.service - OpenSSH per-connection server daemon (10.200.16.10:47728).
Jan 30 13:29:26.183847 sshd[4957]: Accepted publickey for core from 10.200.16.10 port 47728 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:26.185163 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:26.189274 systemd-logind[1690]: New session 23 of user core.
Jan 30 13:29:26.195090 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:29:26.565445 sshd[4959]: Connection closed by 10.200.16.10 port 47728
Jan 30 13:29:26.566212 sshd-session[4957]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:26.569647 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:47728.service: Deactivated successfully.
Jan 30 13:29:26.572395 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:29:26.573448 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:29:26.574548 systemd-logind[1690]: Removed session 23.
Jan 30 13:29:26.658196 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:43560.service - OpenSSH per-connection server daemon (10.200.16.10:43560).
Jan 30 13:29:27.084437 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 43560 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:27.085877 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:27.090019 systemd-logind[1690]: New session 24 of user core.
Jan 30 13:29:27.099118 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:29:29.012328 systemd[1]: run-containerd-runc-k8s.io-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59-runc.C6y7CJ.mount: Deactivated successfully.
Jan 30 13:29:29.018667 containerd[1750]: time="2025-01-30T13:29:29.018249115Z" level=info msg="StopContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" with timeout 30 (s)"
Jan 30 13:29:29.019706 containerd[1750]: time="2025-01-30T13:29:29.019474035Z" level=info msg="Stop container \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" with signal terminated"
Jan 30 13:29:29.029314 containerd[1750]: time="2025-01-30T13:29:29.029264631Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:29:29.037024 systemd[1]: cri-containerd-4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4.scope: Deactivated successfully.
Jan 30 13:29:29.041291 containerd[1750]: time="2025-01-30T13:29:29.041116386Z" level=info msg="StopContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" with timeout 2 (s)"
Jan 30 13:29:29.041896 containerd[1750]: time="2025-01-30T13:29:29.041860745Z" level=info msg="Stop container \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" with signal terminated"
Jan 30 13:29:29.050709 systemd-networkd[1537]: lxc_health: Link DOWN
Jan 30 13:29:29.050718 systemd-networkd[1537]: lxc_health: Lost carrier
Jan 30 13:29:29.068436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4-rootfs.mount: Deactivated successfully.
Jan 30 13:29:29.073333 systemd[1]: cri-containerd-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59.scope: Deactivated successfully.
Jan 30 13:29:29.073590 systemd[1]: cri-containerd-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59.scope: Consumed 6.389s CPU time.
Jan 30 13:29:29.095319 containerd[1750]: time="2025-01-30T13:29:29.095253243Z" level=info msg="shim disconnected" id=4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4 namespace=k8s.io
Jan 30 13:29:29.095319 containerd[1750]: time="2025-01-30T13:29:29.095310603Z" level=warning msg="cleaning up after shim disconnected" id=4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4 namespace=k8s.io
Jan 30 13:29:29.095319 containerd[1750]: time="2025-01-30T13:29:29.095319563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:29.097527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59-rootfs.mount: Deactivated successfully.
Jan 30 13:29:29.110986 containerd[1750]: time="2025-01-30T13:29:29.110905996Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:29:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:29:29.124794 containerd[1750]: time="2025-01-30T13:29:29.124720031Z" level=info msg="StopContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" returns successfully"
Jan 30 13:29:29.126699 containerd[1750]: time="2025-01-30T13:29:29.125507630Z" level=info msg="StopPodSandbox for \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\""
Jan 30 13:29:29.126699 containerd[1750]: time="2025-01-30T13:29:29.125545270Z" level=info msg="Container to stop \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.128758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c-shm.mount: Deactivated successfully.
Jan 30 13:29:29.130950 containerd[1750]: time="2025-01-30T13:29:29.130786228Z" level=info msg="shim disconnected" id=b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59 namespace=k8s.io
Jan 30 13:29:29.130950 containerd[1750]: time="2025-01-30T13:29:29.130852068Z" level=warning msg="cleaning up after shim disconnected" id=b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59 namespace=k8s.io
Jan 30 13:29:29.130950 containerd[1750]: time="2025-01-30T13:29:29.130860868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:29.140778 systemd[1]: cri-containerd-dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c.scope: Deactivated successfully.
Jan 30 13:29:29.157530 containerd[1750]: time="2025-01-30T13:29:29.157265337Z" level=info msg="StopContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" returns successfully"
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159470056Z" level=info msg="StopPodSandbox for \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\""
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159518856Z" level=info msg="Container to stop \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159529816Z" level=info msg="Container to stop \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159540496Z" level=info msg="Container to stop \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159551736Z" level=info msg="Container to stop \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.159649 containerd[1750]: time="2025-01-30T13:29:29.159560496Z" level=info msg="Container to stop \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:29:29.167660 systemd[1]: cri-containerd-649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564.scope: Deactivated successfully.
Jan 30 13:29:29.181773 containerd[1750]: time="2025-01-30T13:29:29.181693127Z" level=info msg="shim disconnected" id=dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c namespace=k8s.io Jan 30 13:29:29.181773 containerd[1750]: time="2025-01-30T13:29:29.181750767Z" level=warning msg="cleaning up after shim disconnected" id=dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c namespace=k8s.io Jan 30 13:29:29.181773 containerd[1750]: time="2025-01-30T13:29:29.181758847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:29:29.198353 containerd[1750]: time="2025-01-30T13:29:29.198193800Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:29:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:29:29.201044 containerd[1750]: time="2025-01-30T13:29:29.200046199Z" level=info msg="TearDown network for sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" successfully" Jan 30 13:29:29.201044 containerd[1750]: time="2025-01-30T13:29:29.200083679Z" level=info msg="StopPodSandbox for \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" returns successfully" Jan 30 13:29:29.203705 containerd[1750]: time="2025-01-30T13:29:29.203247318Z" level=info msg="shim disconnected" id=649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564 namespace=k8s.io Jan 30 13:29:29.203705 containerd[1750]: time="2025-01-30T13:29:29.203305998Z" level=warning msg="cleaning up after shim disconnected" id=649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564 namespace=k8s.io Jan 30 13:29:29.203705 containerd[1750]: time="2025-01-30T13:29:29.203314318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:29:29.220846 containerd[1750]: time="2025-01-30T13:29:29.220797590Z" level=info msg="TearDown network for sandbox 
\"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" successfully" Jan 30 13:29:29.220846 containerd[1750]: time="2025-01-30T13:29:29.220836910Z" level=info msg="StopPodSandbox for \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" returns successfully" Jan 30 13:29:29.254467 kubelet[3399]: I0130 13:29:29.254400 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-cgroup\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254467 kubelet[3399]: I0130 13:29:29.254446 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-bpf-maps\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254467 kubelet[3399]: I0130 13:29:29.254465 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-kernel\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254489 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-config-path\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254508 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-etc-cni-netd\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" 
(UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254523 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-lib-modules\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254563 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48105e32-c92e-40a6-b065-786ce3a90f69-cilium-config-path\") pod \"48105e32-c92e-40a6-b065-786ce3a90f69\" (UID: \"48105e32-c92e-40a6-b065-786ce3a90f69\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254581 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hubble-tls\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.254942 kubelet[3399]: I0130 13:29:29.254599 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-xtables-lock\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254614 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hostproc\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254633 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv2n7\" (UniqueName: 
\"kubernetes.io/projected/48105e32-c92e-40a6-b065-786ce3a90f69-kube-api-access-xv2n7\") pod \"48105e32-c92e-40a6-b065-786ce3a90f69\" (UID: \"48105e32-c92e-40a6-b065-786ce3a90f69\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254650 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k26mz\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254666 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-net\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254683 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-clustermesh-secrets\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255088 kubelet[3399]: I0130 13:29:29.254698 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-run\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255216 kubelet[3399]: I0130 13:29:29.254712 3399 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cni-path\") pod \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\" (UID: \"d50ee0ad-ade5-4aeb-b30a-6847f68f108a\") " Jan 30 13:29:29.255216 kubelet[3399]: I0130 
13:29:29.254784 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cni-path" (OuterVolumeSpecName: "cni-path") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255216 kubelet[3399]: I0130 13:29:29.254817 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255216 kubelet[3399]: I0130 13:29:29.254832 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255216 kubelet[3399]: I0130 13:29:29.254846 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255940 kubelet[3399]: I0130 13:29:29.255372 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hostproc" (OuterVolumeSpecName: "hostproc") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255940 kubelet[3399]: I0130 13:29:29.255411 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.255940 kubelet[3399]: I0130 13:29:29.255429 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.259001 kubelet[3399]: I0130 13:29:29.257820 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.259601 kubelet[3399]: I0130 13:29:29.259571 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48105e32-c92e-40a6-b065-786ce3a90f69-kube-api-access-xv2n7" (OuterVolumeSpecName: "kube-api-access-xv2n7") pod "48105e32-c92e-40a6-b065-786ce3a90f69" (UID: "48105e32-c92e-40a6-b065-786ce3a90f69"). InnerVolumeSpecName "kube-api-access-xv2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:29:29.259846 kubelet[3399]: I0130 13:29:29.259802 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:29:29.260990 kubelet[3399]: I0130 13:29:29.260892 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.261247 kubelet[3399]: I0130 13:29:29.261222 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:29:29.261342 kubelet[3399]: I0130 13:29:29.261321 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:29:29.262990 kubelet[3399]: I0130 13:29:29.262678 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz" (OuterVolumeSpecName: "kube-api-access-k26mz") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "kube-api-access-k26mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:29:29.263103 kubelet[3399]: I0130 13:29:29.262998 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48105e32-c92e-40a6-b065-786ce3a90f69-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48105e32-c92e-40a6-b065-786ce3a90f69" (UID: "48105e32-c92e-40a6-b065-786ce3a90f69"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:29:29.265164 kubelet[3399]: I0130 13:29:29.265125 3399 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d50ee0ad-ade5-4aeb-b30a-6847f68f108a" (UID: "d50ee0ad-ade5-4aeb-b30a-6847f68f108a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:29:29.301359 kubelet[3399]: I0130 13:29:29.301315 3399 scope.go:117] "RemoveContainer" containerID="4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4" Jan 30 13:29:29.304403 containerd[1750]: time="2025-01-30T13:29:29.304351635Z" level=info msg="RemoveContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\"" Jan 30 13:29:29.307163 systemd[1]: Removed slice kubepods-besteffort-pod48105e32_c92e_40a6_b065_786ce3a90f69.slice - libcontainer container kubepods-besteffort-pod48105e32_c92e_40a6_b065_786ce3a90f69.slice. Jan 30 13:29:29.317457 containerd[1750]: time="2025-01-30T13:29:29.316824310Z" level=info msg="RemoveContainer for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" returns successfully" Jan 30 13:29:29.317189 systemd[1]: Removed slice kubepods-burstable-podd50ee0ad_ade5_4aeb_b30a_6847f68f108a.slice - libcontainer container kubepods-burstable-podd50ee0ad_ade5_4aeb_b30a_6847f68f108a.slice. Jan 30 13:29:29.317309 systemd[1]: kubepods-burstable-podd50ee0ad_ade5_4aeb_b30a_6847f68f108a.slice: Consumed 6.463s CPU time. 
Jan 30 13:29:29.322591 kubelet[3399]: I0130 13:29:29.321476 3399 scope.go:117] "RemoveContainer" containerID="4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4" Jan 30 13:29:29.324261 containerd[1750]: time="2025-01-30T13:29:29.324026907Z" level=error msg="ContainerStatus for \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\": not found" Jan 30 13:29:29.325347 kubelet[3399]: E0130 13:29:29.325284 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\": not found" containerID="4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4" Jan 30 13:29:29.325582 kubelet[3399]: I0130 13:29:29.325506 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4"} err="failed to get container status \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c76e8fc12f598d28e9c330a7dd3a82df3766adcc057a1adc6f3361f474c45a4\": not found" Jan 30 13:29:29.325666 kubelet[3399]: I0130 13:29:29.325655 3399 scope.go:117] "RemoveContainer" containerID="b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59" Jan 30 13:29:29.328546 containerd[1750]: time="2025-01-30T13:29:29.328462825Z" level=info msg="RemoveContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\"" Jan 30 13:29:29.338483 containerd[1750]: time="2025-01-30T13:29:29.338438701Z" level=info msg="RemoveContainer for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" returns successfully" Jan 30 
13:29:29.339192 kubelet[3399]: I0130 13:29:29.339160 3399 scope.go:117] "RemoveContainer" containerID="c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a" Jan 30 13:29:29.341736 containerd[1750]: time="2025-01-30T13:29:29.341425940Z" level=info msg="RemoveContainer for \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\"" Jan 30 13:29:29.349434 containerd[1750]: time="2025-01-30T13:29:29.349386417Z" level=info msg="RemoveContainer for \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\" returns successfully" Jan 30 13:29:29.349843 kubelet[3399]: I0130 13:29:29.349678 3399 scope.go:117] "RemoveContainer" containerID="94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b" Jan 30 13:29:29.353502 containerd[1750]: time="2025-01-30T13:29:29.353219655Z" level=info msg="RemoveContainer for \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.354972 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-run\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355001 3399 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cni-path\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355011 3399 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355020 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-config-path\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355031 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-cilium-cgroup\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355039 3399 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-bpf-maps\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355047 3399 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-lib-modules\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355070 kubelet[3399]: I0130 13:29:29.355054 3399 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-etc-cni-netd\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355061 3399 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hubble-tls\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355069 3399 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-xtables-lock\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355077 3399 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/48105e32-c92e-40a6-b065-786ce3a90f69-cilium-config-path\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355084 3399 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k26mz\" (UniqueName: \"kubernetes.io/projected/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-kube-api-access-k26mz\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355093 3399 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-hostproc\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355102 3399 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xv2n7\" (UniqueName: \"kubernetes.io/projected/48105e32-c92e-40a6-b065-786ce3a90f69-kube-api-access-xv2n7\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355112 3399 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-host-proc-sys-net\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.355379 kubelet[3399]: I0130 13:29:29.355120 3399 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d50ee0ad-ade5-4aeb-b30a-6847f68f108a-clustermesh-secrets\") on node \"ci-4186.1.0-a-a27a4db638\" DevicePath \"\"" Jan 30 13:29:29.363408 containerd[1750]: time="2025-01-30T13:29:29.363309891Z" level=info msg="RemoveContainer for \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\" returns successfully" Jan 30 13:29:29.363704 kubelet[3399]: I0130 13:29:29.363672 3399 scope.go:117] "RemoveContainer" containerID="d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7" Jan 30 
13:29:29.364884 containerd[1750]: time="2025-01-30T13:29:29.364854930Z" level=info msg="RemoveContainer for \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\"" Jan 30 13:29:29.374807 containerd[1750]: time="2025-01-30T13:29:29.374706366Z" level=info msg="RemoveContainer for \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\" returns successfully" Jan 30 13:29:29.375301 kubelet[3399]: I0130 13:29:29.375142 3399 scope.go:117] "RemoveContainer" containerID="0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da" Jan 30 13:29:29.376865 containerd[1750]: time="2025-01-30T13:29:29.376607605Z" level=info msg="RemoveContainer for \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\"" Jan 30 13:29:29.384509 containerd[1750]: time="2025-01-30T13:29:29.384469562Z" level=info msg="RemoveContainer for \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\" returns successfully" Jan 30 13:29:29.384886 kubelet[3399]: I0130 13:29:29.384843 3399 scope.go:117] "RemoveContainer" containerID="b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59" Jan 30 13:29:29.385222 containerd[1750]: time="2025-01-30T13:29:29.385119362Z" level=error msg="ContainerStatus for \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\": not found" Jan 30 13:29:29.385509 kubelet[3399]: E0130 13:29:29.385370 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\": not found" containerID="b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59" Jan 30 13:29:29.385509 kubelet[3399]: I0130 13:29:29.385401 3399 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59"} err="failed to get container status \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\": rpc error: code = NotFound desc = an error occurred when try to find container \"b35dff4ca09e9a924fc4415e8fa3a229066671ac2bd380a2412caa30fb463a59\": not found" Jan 30 13:29:29.385509 kubelet[3399]: I0130 13:29:29.385424 3399 scope.go:117] "RemoveContainer" containerID="c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a" Jan 30 13:29:29.385710 containerd[1750]: time="2025-01-30T13:29:29.385592482Z" level=error msg="ContainerStatus for \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\": not found" Jan 30 13:29:29.385931 kubelet[3399]: E0130 13:29:29.385831 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\": not found" containerID="c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a" Jan 30 13:29:29.385931 kubelet[3399]: I0130 13:29:29.385865 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a"} err="failed to get container status \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c07a99821bacf0378b1f4266fbc26d05cc989a131cb2fc87f93e34c56ed97c4a\": not found" Jan 30 13:29:29.385931 kubelet[3399]: I0130 13:29:29.385881 3399 scope.go:117] "RemoveContainer" containerID="94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b" Jan 30 13:29:29.386255 
containerd[1750]: time="2025-01-30T13:29:29.386189841Z" level=error msg="ContainerStatus for \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\": not found" Jan 30 13:29:29.386352 kubelet[3399]: E0130 13:29:29.386312 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\": not found" containerID="94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b" Jan 30 13:29:29.386352 kubelet[3399]: I0130 13:29:29.386341 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b"} err="failed to get container status \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\": rpc error: code = NotFound desc = an error occurred when try to find container \"94b1a9cdb1df8c110c7c01404e41bbf893b979bf66aa7de1341fcac20ac4541b\": not found" Jan 30 13:29:29.386512 kubelet[3399]: I0130 13:29:29.386357 3399 scope.go:117] "RemoveContainer" containerID="d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7" Jan 30 13:29:29.386820 containerd[1750]: time="2025-01-30T13:29:29.386746281Z" level=error msg="ContainerStatus for \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\": not found" Jan 30 13:29:29.386945 kubelet[3399]: E0130 13:29:29.386875 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\": not found" containerID="d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7" Jan 30 13:29:29.386945 kubelet[3399]: I0130 13:29:29.386900 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7"} err="failed to get container status \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8af08f7b43769149369f3de2180a6235c94152e1fec0c637045a2356299add7\": not found" Jan 30 13:29:29.386945 kubelet[3399]: I0130 13:29:29.386922 3399 scope.go:117] "RemoveContainer" containerID="0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da" Jan 30 13:29:29.387498 containerd[1750]: time="2025-01-30T13:29:29.387307921Z" level=error msg="ContainerStatus for \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\": not found" Jan 30 13:29:29.387569 kubelet[3399]: E0130 13:29:29.387453 3399 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\": not found" containerID="0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da" Jan 30 13:29:29.387569 kubelet[3399]: I0130 13:29:29.387475 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da"} err="failed to get container status \"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"0d65b9f99d4c5d758ef31ee9f98336cb7320b253c5d8bc2021e8966bbc3600da\": not found" Jan 30 13:29:29.895755 kubelet[3399]: I0130 13:29:29.895715 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48105e32-c92e-40a6-b065-786ce3a90f69" path="/var/lib/kubelet/pods/48105e32-c92e-40a6-b065-786ce3a90f69/volumes" Jan 30 13:29:29.896172 kubelet[3399]: I0130 13:29:29.896148 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" path="/var/lib/kubelet/pods/d50ee0ad-ade5-4aeb-b30a-6847f68f108a/volumes" Jan 30 13:29:30.006941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564-rootfs.mount: Deactivated successfully. Jan 30 13:29:30.007042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564-shm.mount: Deactivated successfully. Jan 30 13:29:30.007105 systemd[1]: var-lib-kubelet-pods-d50ee0ad\x2dade5\x2d4aeb\x2db30a\x2d6847f68f108a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk26mz.mount: Deactivated successfully. Jan 30 13:29:30.007168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c-rootfs.mount: Deactivated successfully. Jan 30 13:29:30.007214 systemd[1]: var-lib-kubelet-pods-48105e32\x2dc92e\x2d40a6\x2db065\x2d786ce3a90f69-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxv2n7.mount: Deactivated successfully. Jan 30 13:29:30.007262 systemd[1]: var-lib-kubelet-pods-d50ee0ad\x2dade5\x2d4aeb\x2db30a\x2d6847f68f108a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:29:30.007310 systemd[1]: var-lib-kubelet-pods-d50ee0ad\x2dade5\x2d4aeb\x2db30a\x2d6847f68f108a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 13:29:31.011476 sshd[4971]: Connection closed by 10.200.16.10 port 43560
Jan 30 13:29:31.012113 sshd-session[4969]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:31.015720 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:43560.service: Deactivated successfully.
Jan 30 13:29:31.018292 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:29:31.018673 systemd[1]: session-24.scope: Consumed 1.036s CPU time.
Jan 30 13:29:31.019593 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:29:31.020538 systemd-logind[1690]: Removed session 24.
Jan 30 13:29:31.084074 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:43576.service - OpenSSH per-connection server daemon (10.200.16.10:43576).
Jan 30 13:29:31.501310 sshd[5129]: Accepted publickey for core from 10.200.16.10 port 43576 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:31.502641 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:31.506477 systemd-logind[1690]: New session 25 of user core.
Jan 30 13:29:31.516094 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:29:32.787586 kubelet[3399]: I0130 13:29:32.787537 3399 topology_manager.go:215] "Topology Admit Handler" podUID="7953f786-8e43-402e-bb90-7bc44ed202bd" podNamespace="kube-system" podName="cilium-rwhdn"
Jan 30 13:29:32.787586 kubelet[3399]: E0130 13:29:32.787592 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48105e32-c92e-40a6-b065-786ce3a90f69" containerName="cilium-operator"
Jan 30 13:29:32.788036 kubelet[3399]: E0130 13:29:32.787614 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="mount-cgroup"
Jan 30 13:29:32.788036 kubelet[3399]: E0130 13:29:32.787622 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="apply-sysctl-overwrites"
Jan 30 13:29:32.788036 kubelet[3399]: E0130 13:29:32.787629 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="mount-bpf-fs"
Jan 30 13:29:32.788036 kubelet[3399]: E0130 13:29:32.787635 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="clean-cilium-state"
Jan 30 13:29:32.788036 kubelet[3399]: E0130 13:29:32.787642 3399 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="cilium-agent"
Jan 30 13:29:32.788036 kubelet[3399]: I0130 13:29:32.787663 3399 memory_manager.go:354] "RemoveStaleState removing state" podUID="d50ee0ad-ade5-4aeb-b30a-6847f68f108a" containerName="cilium-agent"
Jan 30 13:29:32.788036 kubelet[3399]: I0130 13:29:32.787671 3399 memory_manager.go:354] "RemoveStaleState removing state" podUID="48105e32-c92e-40a6-b065-786ce3a90f69" containerName="cilium-operator"
Jan 30 13:29:32.798879 systemd[1]: Created slice kubepods-burstable-pod7953f786_8e43_402e_bb90_7bc44ed202bd.slice - libcontainer container kubepods-burstable-pod7953f786_8e43_402e_bb90_7bc44ed202bd.slice.
Jan 30 13:29:32.852585 sshd[5131]: Connection closed by 10.200.16.10 port 43576
Jan 30 13:29:32.853213 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:32.856979 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:43576.service: Deactivated successfully.
Jan 30 13:29:32.859634 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:29:32.863006 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:29:32.866663 systemd-logind[1690]: Removed session 25.
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875066 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-host-proc-sys-net\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875109 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-cni-path\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875128 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5km6\" (UniqueName: \"kubernetes.io/projected/7953f786-8e43-402e-bb90-7bc44ed202bd-kube-api-access-h5km6\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875143 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-bpf-maps\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875159 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-xtables-lock\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875408 kubelet[3399]: I0130 13:29:32.875173 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-host-proc-sys-kernel\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875187 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-hostproc\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875206 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-cilium-run\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875221 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-cilium-cgroup\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875236 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-lib-modules\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875251 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7953f786-8e43-402e-bb90-7bc44ed202bd-cilium-config-path\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875648 kubelet[3399]: I0130 13:29:32.875266 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7953f786-8e43-402e-bb90-7bc44ed202bd-hubble-tls\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875777 kubelet[3399]: I0130 13:29:32.875280 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7953f786-8e43-402e-bb90-7bc44ed202bd-etc-cni-netd\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875777 kubelet[3399]: I0130 13:29:32.875296 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7953f786-8e43-402e-bb90-7bc44ed202bd-clustermesh-secrets\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.875777 kubelet[3399]: I0130 13:29:32.875310 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7953f786-8e43-402e-bb90-7bc44ed202bd-cilium-ipsec-secrets\") pod \"cilium-rwhdn\" (UID: \"7953f786-8e43-402e-bb90-7bc44ed202bd\") " pod="kube-system/cilium-rwhdn"
Jan 30 13:29:32.941225 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:43590.service - OpenSSH per-connection server daemon (10.200.16.10:43590).
Jan 30 13:29:33.104293 containerd[1750]: time="2025-01-30T13:29:33.103834065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwhdn,Uid:7953f786-8e43-402e-bb90-7bc44ed202bd,Namespace:kube-system,Attempt:0,}"
Jan 30 13:29:33.140461 containerd[1750]: time="2025-01-30T13:29:33.139884371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:29:33.140665 containerd[1750]: time="2025-01-30T13:29:33.140616970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:29:33.140847 containerd[1750]: time="2025-01-30T13:29:33.140715930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:29:33.140935 containerd[1750]: time="2025-01-30T13:29:33.140833450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:29:33.163141 systemd[1]: Started cri-containerd-c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6.scope - libcontainer container c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6.
Jan 30 13:29:33.186618 containerd[1750]: time="2025-01-30T13:29:33.186582592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwhdn,Uid:7953f786-8e43-402e-bb90-7bc44ed202bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\""
Jan 30 13:29:33.191400 containerd[1750]: time="2025-01-30T13:29:33.191357950Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:29:33.222177 containerd[1750]: time="2025-01-30T13:29:33.222096378Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396\""
Jan 30 13:29:33.223842 containerd[1750]: time="2025-01-30T13:29:33.222970617Z" level=info msg="StartContainer for \"146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396\""
Jan 30 13:29:33.254145 systemd[1]: Started cri-containerd-146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396.scope - libcontainer container 146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396.
Jan 30 13:29:33.286578 containerd[1750]: time="2025-01-30T13:29:33.285809432Z" level=info msg="StartContainer for \"146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396\" returns successfully"
Jan 30 13:29:33.286111 systemd[1]: cri-containerd-146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396.scope: Deactivated successfully.
Jan 30 13:29:33.342936 containerd[1750]: time="2025-01-30T13:29:33.342854689Z" level=info msg="shim disconnected" id=146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396 namespace=k8s.io
Jan 30 13:29:33.342936 containerd[1750]: time="2025-01-30T13:29:33.342931649Z" level=warning msg="cleaning up after shim disconnected" id=146ec42e45b1e2b686883ab5f7dba4cac1c10fd23e5c2363f2261ff65bc8d396 namespace=k8s.io
Jan 30 13:29:33.342936 containerd[1750]: time="2025-01-30T13:29:33.342942369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:33.369065 sshd[5140]: Accepted publickey for core from 10.200.16.10 port 43590 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:33.370413 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:33.374988 systemd-logind[1690]: New session 26 of user core.
Jan 30 13:29:33.380113 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:29:33.686712 sshd[5251]: Connection closed by 10.200.16.10 port 43590
Jan 30 13:29:33.687647 sshd-session[5140]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:33.691355 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:43590.service: Deactivated successfully.
Jan 30 13:29:33.694425 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:29:33.695473 systemd-logind[1690]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:29:33.696730 systemd-logind[1690]: Removed session 26.
Jan 30 13:29:33.763233 systemd[1]: Started sshd@24-10.200.20.21:22-10.200.16.10:43606.service - OpenSSH per-connection server daemon (10.200.16.10:43606).
Jan 30 13:29:34.007035 kubelet[3399]: E0130 13:29:34.006904 3399 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:29:34.174713 sshd[5257]: Accepted publickey for core from 10.200.16.10 port 43606 ssh2: RSA SHA256:C5pjVMYzONmJhds0jUZO5MZNsVxbc+yYzbKaFYsva20
Jan 30 13:29:34.176425 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:29:34.184443 systemd-logind[1690]: New session 27 of user core.
Jan 30 13:29:34.189112 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:29:34.328760 containerd[1750]: time="2025-01-30T13:29:34.328597612Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:29:34.434794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379165460.mount: Deactivated successfully.
Jan 30 13:29:34.449365 containerd[1750]: time="2025-01-30T13:29:34.449227084Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43\""
Jan 30 13:29:34.451620 containerd[1750]: time="2025-01-30T13:29:34.449966324Z" level=info msg="StartContainer for \"7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43\""
Jan 30 13:29:34.483176 systemd[1]: Started cri-containerd-7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43.scope - libcontainer container 7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43.
Jan 30 13:29:34.520173 containerd[1750]: time="2025-01-30T13:29:34.519936335Z" level=info msg="StartContainer for \"7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43\" returns successfully"
Jan 30 13:29:34.528210 systemd[1]: cri-containerd-7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43.scope: Deactivated successfully.
Jan 30 13:29:34.607084 containerd[1750]: time="2025-01-30T13:29:34.606939660Z" level=info msg="shim disconnected" id=7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43 namespace=k8s.io
Jan 30 13:29:34.607564 containerd[1750]: time="2025-01-30T13:29:34.607270460Z" level=warning msg="cleaning up after shim disconnected" id=7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43 namespace=k8s.io
Jan 30 13:29:34.607564 containerd[1750]: time="2025-01-30T13:29:34.607457220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:34.983233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fcccdcf7fb8ef6bb0af746dde2f84095ce53c29f327cc7899fcb6f97cb82a43-rootfs.mount: Deactivated successfully.
Jan 30 13:29:35.333013 containerd[1750]: time="2025-01-30T13:29:35.332966488Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:29:35.374066 containerd[1750]: time="2025-01-30T13:29:35.374013672Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14\""
Jan 30 13:29:35.376395 containerd[1750]: time="2025-01-30T13:29:35.375115631Z" level=info msg="StartContainer for \"30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14\""
Jan 30 13:29:35.403099 systemd[1]: Started cri-containerd-30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14.scope - libcontainer container 30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14.
Jan 30 13:29:35.432547 systemd[1]: cri-containerd-30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14.scope: Deactivated successfully.
Jan 30 13:29:35.437566 containerd[1750]: time="2025-01-30T13:29:35.437503046Z" level=info msg="StartContainer for \"30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14\" returns successfully"
Jan 30 13:29:35.466406 containerd[1750]: time="2025-01-30T13:29:35.466292555Z" level=info msg="shim disconnected" id=30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14 namespace=k8s.io
Jan 30 13:29:35.466406 containerd[1750]: time="2025-01-30T13:29:35.466356075Z" level=warning msg="cleaning up after shim disconnected" id=30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14 namespace=k8s.io
Jan 30 13:29:35.466406 containerd[1750]: time="2025-01-30T13:29:35.466365195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:35.476962 containerd[1750]: time="2025-01-30T13:29:35.476177871Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:29:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:29:35.983246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30c44a610b89892985d03babb9be30c59372f454e623eadca446ef9ed0702e14-rootfs.mount: Deactivated successfully.
Jan 30 13:29:36.336876 containerd[1750]: time="2025-01-30T13:29:36.336391324Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:29:36.371066 containerd[1750]: time="2025-01-30T13:29:36.371013031Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef\""
Jan 30 13:29:36.371858 containerd[1750]: time="2025-01-30T13:29:36.371593150Z" level=info msg="StartContainer for \"0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef\""
Jan 30 13:29:36.405120 systemd[1]: Started cri-containerd-0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef.scope - libcontainer container 0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef.
Jan 30 13:29:36.428003 systemd[1]: cri-containerd-0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef.scope: Deactivated successfully.
Jan 30 13:29:36.434615 containerd[1750]: time="2025-01-30T13:29:36.434530245Z" level=info msg="StartContainer for \"0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef\" returns successfully"
Jan 30 13:29:36.462619 containerd[1750]: time="2025-01-30T13:29:36.462545714Z" level=info msg="shim disconnected" id=0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef namespace=k8s.io
Jan 30 13:29:36.462619 containerd[1750]: time="2025-01-30T13:29:36.462609514Z" level=warning msg="cleaning up after shim disconnected" id=0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef namespace=k8s.io
Jan 30 13:29:36.462619 containerd[1750]: time="2025-01-30T13:29:36.462617994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:29:36.626115 kubelet[3399]: I0130 13:29:36.624945 3399 setters.go:580] "Node became not ready" node="ci-4186.1.0-a-a27a4db638" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:29:36Z","lastTransitionTime":"2025-01-30T13:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:29:36.983512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f52aa3f9526906cc93b8a3b19dd129d7c702d790383717fb1a039f3f58ce5ef-rootfs.mount: Deactivated successfully.
Jan 30 13:29:37.344462 containerd[1750]: time="2025-01-30T13:29:37.344319159Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:29:37.369152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154220132.mount: Deactivated successfully.
Jan 30 13:29:37.385858 containerd[1750]: time="2025-01-30T13:29:37.385803262Z" level=info msg="CreateContainer within sandbox \"c841b25d9e194d74261165016476e85b0735bdac2c67c7f549cdd76f1de4a6f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc\""
Jan 30 13:29:37.387995 containerd[1750]: time="2025-01-30T13:29:37.387260582Z" level=info msg="StartContainer for \"7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc\""
Jan 30 13:29:37.420112 systemd[1]: Started cri-containerd-7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc.scope - libcontainer container 7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc.
Jan 30 13:29:37.450907 containerd[1750]: time="2025-01-30T13:29:37.450771556Z" level=info msg="StartContainer for \"7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc\" returns successfully"
Jan 30 13:29:37.771955 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 13:29:38.361408 kubelet[3399]: I0130 13:29:38.361327 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwhdn" podStartSLOduration=6.36131023 podStartE2EDuration="6.36131023s" podCreationTimestamp="2025-01-30 13:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:29:38.36058383 +0000 UTC m=+174.564774101" watchObservedRunningTime="2025-01-30 13:29:38.36131023 +0000 UTC m=+174.565500501"
Jan 30 13:29:40.529359 systemd-networkd[1537]: lxc_health: Link UP
Jan 30 13:29:40.539164 systemd-networkd[1537]: lxc_health: Gained carrier
Jan 30 13:29:40.806251 systemd[1]: run-containerd-runc-k8s.io-7e8276d03288db9198bd95a0f267651f019994615b3fd878f6ab16cf34a988fc-runc.H6woAO.mount: Deactivated successfully.
Jan 30 13:29:41.578117 systemd-networkd[1537]: lxc_health: Gained IPv6LL
Jan 30 13:29:43.912112 containerd[1750]: time="2025-01-30T13:29:43.912048092Z" level=info msg="StopPodSandbox for \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\""
Jan 30 13:29:43.912458 containerd[1750]: time="2025-01-30T13:29:43.912181532Z" level=info msg="TearDown network for sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" successfully"
Jan 30 13:29:43.912458 containerd[1750]: time="2025-01-30T13:29:43.912200132Z" level=info msg="StopPodSandbox for \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" returns successfully"
Jan 30 13:29:43.912860 containerd[1750]: time="2025-01-30T13:29:43.912807172Z" level=info msg="RemovePodSandbox for \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\""
Jan 30 13:29:43.912860 containerd[1750]: time="2025-01-30T13:29:43.912849012Z" level=info msg="Forcibly stopping sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\""
Jan 30 13:29:43.913009 containerd[1750]: time="2025-01-30T13:29:43.912899812Z" level=info msg="TearDown network for sandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" successfully"
Jan 30 13:29:43.920051 containerd[1750]: time="2025-01-30T13:29:43.919976251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:29:43.920170 containerd[1750]: time="2025-01-30T13:29:43.920083331Z" level=info msg="RemovePodSandbox \"dd0b338bb67aed06ab029037bba19932c8974d11ce03d1d255b46611f3e5de2c\" returns successfully"
Jan 30 13:29:43.920565 containerd[1750]: time="2025-01-30T13:29:43.920511891Z" level=info msg="StopPodSandbox for \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\""
Jan 30 13:29:43.920631 containerd[1750]: time="2025-01-30T13:29:43.920593970Z" level=info msg="TearDown network for sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" successfully"
Jan 30 13:29:43.920631 containerd[1750]: time="2025-01-30T13:29:43.920604570Z" level=info msg="StopPodSandbox for \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" returns successfully"
Jan 30 13:29:43.921211 containerd[1750]: time="2025-01-30T13:29:43.921093170Z" level=info msg="RemovePodSandbox for \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\""
Jan 30 13:29:43.921282 containerd[1750]: time="2025-01-30T13:29:43.921199890Z" level=info msg="Forcibly stopping sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\""
Jan 30 13:29:43.921308 containerd[1750]: time="2025-01-30T13:29:43.921278890Z" level=info msg="TearDown network for sandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" successfully"
Jan 30 13:29:43.928331 containerd[1750]: time="2025-01-30T13:29:43.927813609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:29:43.928331 containerd[1750]: time="2025-01-30T13:29:43.927939849Z" level=info msg="RemovePodSandbox \"649e4b3e7af7b483747cfbf3ad4dee5c094fabe80eddbf5c6e001025b2827564\" returns successfully"
Jan 30 13:29:47.269246 kubelet[3399]: E0130 13:29:47.268948 3399 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50546->127.0.0.1:43175: write tcp 127.0.0.1:50546->127.0.0.1:43175: write: broken pipe
Jan 30 13:29:47.355248 sshd[5259]: Connection closed by 10.200.16.10 port 43606
Jan 30 13:29:47.356067 sshd-session[5257]: pam_unix(sshd:session): session closed for user core
Jan 30 13:29:47.359395 systemd-logind[1690]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:29:47.362022 systemd[1]: sshd@24-10.200.20.21:22-10.200.16.10:43606.service: Deactivated successfully.
Jan 30 13:29:47.365870 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:29:47.367682 systemd-logind[1690]: Removed session 27.